
Re: [RFC XEN PATCH v2 1/3] xen/vpci: Clear all vpci status of device


  • To: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Date: Mon, 4 Dec 2023 06:57:03 +0000
  • Accept-language: en-US
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, "Deucher, Alexander" <Alexander.Deucher@xxxxxxx>, "Ragiadakou, Xenia" <Xenia.Ragiadakou@xxxxxxx>, "Stabellini, Stefano" <stefano.stabellini@xxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "Huang, Honglei1" <Honglei1.Huang@xxxxxxx>, "Zhang, Julia" <Julia.Zhang@xxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Mon, 04 Dec 2023 06:57:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC XEN PATCH v2 1/3] xen/vpci: Clear all vpci status of device

Hi Daniel P. Smith,

On 2023/11/30 22:52, Roger Pau Monné wrote:
> On Thu, Nov 30, 2023 at 07:39:38AM -0500, Daniel P. Smith wrote:
>> On 11/30/23 07:25, Daniel P. Smith wrote:
>>> On 11/30/23 01:22, Chen, Jiqian wrote:
>>>> Hi Roger and Daniel P. Smith,
>>>>
>>>> On 2023/11/28 22:08, Roger Pau Monné wrote:
>>>>> On Fri, Nov 24, 2023 at 06:41:34PM +0800, Jiqian Chen wrote:
>>>>>> When a device has been reset on the dom0 side, vpci on the Xen
>>>>>> side won't get a notification, so the cached state in vpci is
>>>>>> all out of date compared with the real device state.
>>>>>> To solve that problem, this patch adds a new hypercall to clear
>>>>>> all vpci state of a device. When a device reset happens on the
>>>>>> dom0 side, dom0 can call this hypercall to notify vpci.
>>>>>>
>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@xxxxxxx>
>>>>>> Signed-off-by: Huang Rui <ray.huang@xxxxxxx>
>>>>>> ---
>>>>>>   xen/arch/x86/hvm/hypercall.c  |  1 +
>>>>>>   xen/drivers/passthrough/pci.c | 21 +++++++++++++++++++++
>>>>>>   xen/drivers/pci/physdev.c     | 14 ++++++++++++++
>>>>>>   xen/drivers/vpci/vpci.c       |  9 +++++++++
>>>>>>   xen/include/public/physdev.h  |  2 ++
>>>>>>   xen/include/xen/pci.h         |  1 +
>>>>>>   xen/include/xen/vpci.h        |  6 ++++++
>>>>>>   7 files changed, 54 insertions(+)
>>>>>>
>>>>>> diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
>>>>>> index eeb73e1aa5..6ad5b4d5f1 100644
>>>>>> --- a/xen/arch/x86/hvm/hypercall.c
>>>>>> +++ b/xen/arch/x86/hvm/hypercall.c
>>>>>> @@ -84,6 +84,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>       case PHYSDEVOP_pci_mmcfg_reserved:
>>>>>>       case PHYSDEVOP_pci_device_add:
>>>>>>       case PHYSDEVOP_pci_device_remove:
>>>>>> +    case PHYSDEVOP_pci_device_state_reset:
>>>>>>       case PHYSDEVOP_dbgp_op:
>>>>>>           if ( !is_hardware_domain(currd) )
>>>>>>               return -ENOSYS;
>>>>>> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
>>>>>> index 04d00c7c37..f871715585 100644
>>>>>> --- a/xen/drivers/passthrough/pci.c
>>>>>> +++ b/xen/drivers/passthrough/pci.c
>>>>>> @@ -824,6 +824,27 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>>>>>>       return ret;
>>>>>>   }
>>>>>> +int pci_reset_device_state(u16 seg, u8 bus, u8 devfn)
>>>>>
>>>>> You could use pci_sbdf_t here instead of 3 parameters.
>>>> Will change in next version, thank you.
>>>>
>>>>>
>>>>> I'm however unsure whether we really need this helper just to fetch
>>>>> the pdev and call vpci_reset_device_state(), I think you could place
>>>>> this logic directly in pci_physdev_op().  Unless there are plans to
>>>>> use such logic outside of pci_physdev_op().
>>>> If I place the logic of pci_reset_device_state directly in
>>>> pci_physdev_op, I think I need to declare vpci_reset_device_state in
>>>> pci.h? If that is suitable, I will change it in the next version.
>>>>
>>>>>
>>>>>> +{
>>>>>> +    struct pci_dev *pdev;
>>>>>> +    int ret = -ENODEV;
>>>>>
>>>>> Some XSM check should be introduced for this operation, as none of the
>>>>> existing ones is suitable.  See xsm_resource_unplug_pci() for example.
>>>>>
>>>>> xsm_resource_reset_state_pci() or some such I would assume.
>>>> I don't know what I should do in the XSM function (assuming it is
>>>> xsm_resource_reset_state_pci).
>>>> Hi Daniel P. Smith, could you please give me some suggestions?
>>>> Should it just check the XSM_PRIV action?
>>>>
>>>
>>> Roger, thank you for seeing this and adding me in!
>>>
>>> Jiqian, I just wanted to let you know I have seen your post but I have
>>> been a little tied up this week. Just from a cursory look, I think
>>> Roger is suggesting a new XSM check/hook is warranted.
>>>
>>> If you would like to attempt writing the dummy policy side, mimic
>>> xsm_resource_plug_pci in xen/include/xsm/dummy.h and
>>> xen/include/xsm/xsm.h, then I can look at handling the FLASK portion
>>> next week and provide it to you for inclusion into the series. If you
>>> are not comfortable with it, I can look at the whole thing next week.
>>> Just let me know what you would be comfortable with.
>>
>> Apologies, after thinking about it for a moment I believe the hook should be
>> called xsm_resource_config_pci. I would treat reset as a config operation, and
>> there might be new ones in the future. I do not believe there is a need to
>> have fine-grained access control down to individual config operations, so
>> they could all be captured under this one hook. Roger, what are your
>> thoughts about this, in particular how you see vpci evolving?
> 
> So the configuration space reset should only be done by the domain
> that's also capable of triggering the physical device reset, usually
> the hardware domain.  I don't think it's possible ATM to allow a
> domain different than the hardware domain to perform a PCI reset, as
> doing it requires unmediated access to the PCI config space.
> 
> So resetting the vPCI state should either be limited to the hardware
> domain, or gated on an explicit PCI reset capability
> (xsm_resource_reset_pci or some such?).  Or maybe
> xsm_resource_config_pci if that denotes unmediated access to the PCI
> config space.
> 
> Thanks, Roger.
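
For the pci_sbdf_t suggestion, what I currently plan for the next version
is roughly the below (just a sketch of the helper, assuming
vpci_reset_device_state() takes the pdev and keeping the current
behaviour otherwise):

    int pci_reset_device_state(pci_sbdf_t sbdf)
    {
        struct pci_dev *pdev;
        int ret = -ENODEV;

        pcidevs_lock();

        /* Look up the device and, if present, drop its cached vpci state. */
        pdev = pci_get_pdev(NULL, sbdf);
        if ( pdev )
            ret = vpci_reset_device_state(pdev);

        pcidevs_unlock();

        return ret;
    }

The XSM check discussed below could then presumably take sbdf.sbdf as the
machine_bdf argument.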

Is something like the below what I need to add for XSM?
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 9d7519eb89..81ceaf145f 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -937,6 +937,10 @@ int pci_reset_device_state(u16 seg, u8 bus, u8 devfn)
     struct pci_dev *pdev;
     int ret = -ENODEV;

+    ret = xsm_resource_config_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
+    if ( ret )
+        return ret;
+
     pcidevs_lock();

     pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 8671af1ba4..bcaff99b23 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -472,6 +472,13 @@ static XSM_INLINE int cf_check xsm_resource_setup_pci(
     return xsm_default_action(action, current->domain, NULL);
 }

+static XSM_INLINE int cf_check xsm_resource_config_pci(
+    XSM_DEFAULT_ARG uint32_t machine_bdf)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, NULL);
+}
+
 static XSM_INLINE int cf_check xsm_resource_setup_gsi(XSM_DEFAULT_ARG int gsi)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 8dad03fd3d..1cb16b00de 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -571,6 +571,12 @@ static inline int xsm_resource_setup_pci(
     return alternative_call(xsm_ops.resource_setup_pci, machine_bdf);
 }

+static inline int xsm_resource_config_pci(
+    xsm_default_t def, uint32_t machine_bdf)
+{
+    return alternative_call(xsm_ops.resource_config_pci, machine_bdf);
+}
+
 static inline int xsm_resource_setup_gsi(xsm_default_t def, int gsi)
 {
     return alternative_call(xsm_ops.resource_setup_gsi, gsi);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index e6ffa948f7..7a289ba5d8 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -91,6 +91,7 @@ static const struct xsm_ops __initconst_cf_clobber dummy_ops = {
     .resource_plug_pci             = xsm_resource_plug_pci,
     .resource_unplug_pci           = xsm_resource_unplug_pci,
     .resource_setup_pci            = xsm_resource_setup_pci,
+    .resource_config_pci           = xsm_resource_config_pci,
     .resource_setup_gsi            = xsm_resource_setup_gsi,
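
Besides the hunks above, I assume struct xsm_ops in xen/include/xsm/xsm.h
also needs a matching member for the alternative_call() to dispatch to,
something like (again only a sketch, placed next to the existing
resource_*_pci members):

     int (*resource_plug_pci)(uint32_t machine_bdf);
     int (*resource_unplug_pci)(uint32_t machine_bdf);
     int (*resource_setup_pci)(uint32_t machine_bdf);
+    int (*resource_config_pci)(uint32_t machine_bdf);
     int (*resource_setup_gsi)(int gsi);

And the FLASK hook would be the part you mentioned you could provide next
week.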


-- 
Best regards,
Jiqian Chen.
