
Re: [XEN PATCH v13 2/6] x86/pvh: Allow (un)map_pirq when dom0 is PVH


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Date: Tue, 3 Sep 2024 04:01:39 +0000
  • Accept-language: en-US
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <gwd@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Anthony PERARD <anthony@xxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, "Daniel P . Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Tue, 03 Sep 2024 04:02:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHa78ytu3Dw7+A5rUaTPQAvAsIAxbIuTvuAgAHk7QD//4uogIAWUUQA
  • Thread-topic: [XEN PATCH v13 2/6] x86/pvh: Allow (un)map_pirq when dom0 is PVH

On 2024/8/20 15:07, Jan Beulich wrote:
> On 20.08.2024 08:12, Chen, Jiqian wrote:
>> On 2024/8/19 17:08, Jan Beulich wrote:
>>> On 16.08.2024 13:08, Jiqian Chen wrote:
>>>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>>>> a passthrough device by using gsi, see qemu code
>>>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>>>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>>>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>>>> is not allowed because currd is PVH dom0 and PVH has no
>>>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>>>
>>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>>>> PHYSDEVOP_unmap_pirq for the removal device path to unmap pirq.
>>>> So that the interrupt of a passthrough device can be successfully
>>>> mapped to pirq for domU with a notion of PIRQ when dom0 is PVH.
>>>>
>>>> To exposing the functionality to wider than (presently) necessary
>>>> audience(like PVH domU), so it doesn't add any futher restrictions.
>>>
>>> The code change is fine, but I'm struggling with this sentence. I can't
>>> really derive what you're trying to say.
>> Ah, I wanted to explain why this patch does not add any further restrictions,
>> so I used your comments from the last version.
>> How should I change this explanation?
> 
> I think you want to take Roger's earlier comments (when he requested
> the relaxation) as basis to re-write (combine) both of the latter two
> paragraphs above (or maybe even all three of them). It's odd to first
> talk about Dom0, as if the operations were to be exposed just there,
> and only then add DomU-s.

I tried to understand and summarize Roger's previous comments and changed the
commit message to the following. Do you think it is fine?

x86/pvh: Allow (un)map_pirq when dom0 is PVH

When dom0 is PVH and a device is passed through to an HVM domU, the QEMU code
xen_pt_realize->xc_physdev_map_pirq and the libxl code pci_add_dm_done->
xc_physdev_map_pirq map a pirq for the passthrough device.
In the xc_physdev_map_pirq call stack, hvm_physdev_op() checks has_pirq(currd),
but currd is the PVH dom0, which has no X86_EMU_USE_PIRQ flag, so the check
fails: PHYSDEVOP_map_pirq is currently not allowed for a PVH dom0.

However, it is fine to map interrupts through pirq to an HVM domain that does
not have XENFEAT_hvm_pirqs enabled: the pirq field is merely a way to reference
interrupts, and is how the device model identifies which interrupt should be
mapped to which domain. has_pirq() only checks whether an HVM domain routes
interrupts from devices (emulated or passthrough) through event channels, so
the has_pirq() check should not be applied to a PHYSDEVOP_map_pirq issued by
dom0.

A PVH domU using vpci that tries to issue a map_pirq will still fail, at the
xsm_map_domain_pirq() check in physdev_map_pirq().

So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
PHYSDEVOP_unmap_pirq so the device removal path can unmap the pirq. Then the
interrupt of a passthrough device can be successfully mapped to a pirq for the
domU.
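
(Not part of the commit message, just to illustrate what the relaxation in
hvm_physdev_op() amounts to. This is a simplified sketch, not the literal
patch: the remaining sub-ops and the final dispatch to do_physdev_op() are
reduced to the minimum needed to show the idea.)

    static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        struct domain *currd = current->domain;

        switch ( cmd )
        {
        case PHYSDEVOP_map_pirq:
        case PHYSDEVOP_unmap_pirq:
            /*
             * No has_pirq() gate any more: a PVH dom0 (no X86_EMU_USE_PIRQ)
             * must be able to (un)map pirqs on behalf of a domU; other
             * callers are still constrained by the XSM checks in
             * physdev_map_pirq() / physdev_unmap_pirq().
             */
            break;

        case PHYSDEVOP_eoi:
        case PHYSDEVOP_irq_status_query:
        case PHYSDEVOP_get_free_pirq:
            /* These still require the domain to emulate PIRQs. */
            if ( !has_pirq(currd) )
                return -ENOSYS;
            break;

        default:
            /* Other sub-ops elided in this sketch. */
            return -ENOSYS;
        }

        return do_physdev_op(cmd, arg);
    }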

> 
>>>> And there already are some senarios for domains without
>>>> X86_EMU_USE_PIRQ to use these functions.
>>>
>>> Are there? If so, pointing out an example may help.
>> If I understand correctly, Roger mentioned that PIRQs are disabled by default
>> for HVM guests ("hvm_pirq=0") with a passthrough device assigned.
>> In that scenario the guest doesn't have PIRQs, but it still needs this hypercall.
> 
> In which case please say so in order to be concrete, not vague.
> 
> Jan
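
To be concrete: on the device model side this boils down to a call chain
ending in xc_physdev_map_pirq(), roughly as in the minimal sketch below. The
domid and GSI values are made-up examples, error handling is trimmed, and the
helpers xc_interface_open()/xc_interface_close() and xc_physdev_unmap_pirq()
are the usual libxc counterparts rather than anything added by this series:

    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        uint32_t domid = 1; /* example domU id */
        int gsi = 28;       /* example GSI of the passthrough device */
        int pirq = -1;      /* let Xen pick a free pirq */
        int rc;

        if ( !xch )
            return 1;

        /*
         * Issues PHYSDEVOP_map_pirq; rejected by hvm_physdev_op() when the
         * caller is a PVH dom0 without this patch.
         */
        rc = xc_physdev_map_pirq(xch, domid, gsi, &pirq);
        printf("map_pirq: rc=%d pirq=%d\n", rc, pirq);

        /* The device removal path issues PHYSDEVOP_unmap_pirq. */
        if ( !rc )
            rc = xc_physdev_unmap_pirq(xch, domid, pirq);

        xc_interface_close(xch);
        return rc ? 1 : 0;
    }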

-- 
Best regards,
Jiqian Chen.

 

