
Re: [PATCH 11/11] xen/arm: Process pending vPCI map/unmap operations


  • To: Julien Grall <julien@xxxxxxx>, Oleksandr Andrushchenko <andr2000@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Mon, 6 Sep 2021 09:14:06 +0000
  • Cc: "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Mon, 06 Sep 2021 09:14:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 11/11] xen/arm: Process pending vPCI map/unmap operations

On 06.09.21 11:48, Julien Grall wrote:
> On 06/09/2021 08:02, Oleksandr Andrushchenko wrote:
>> Hi, Julien!
>
> Hi Oleksandr,
>
>> On 03.09.21 12:04, Julien Grall wrote:
>>> Hi Oleksandr,
>>>
>>> On 03/09/2021 09:33, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>>>>
>>>> vPCI may map and unmap PCI device memory (BARs) being passed through, which
>>>> may take a lot of time. For this reason those operations may be deferred to be
>>>> performed later, so that they can be safely preempted.
>>>> Run the corresponding vPCI code while switching a vCPU.
>>>
>>> IIUC, you are talking about the function map_range() in 
>>> xen/drivers/vpci/header. The function has the following todo for Arm:
>>>
>>>          /*
>>>           * ARM TODOs:
>>>           * - On ARM whether the memory is prefetchable or not should be passed
>>>           *   to map_mmio_regions in order to decide which memory attributes
>>>           *   should be used.
>>>           *
>>>           * - {un}map_mmio_regions doesn't support preemption.
>>>           */
>>>
>>> This doesn't seem to be addressed in the two series for PCI passthrough 
>>> sent so far. Do you have any plan to handle it?
>>
>> No plan yet.
>>
>> All the mappings are happening with p2m_mmio_direct_dev as of now.
>
> So this addresses the first TODO, but how about the second TODO? It refers to 
> the lack of preemption on Arm, but in this patch you suggest there is some 
> and hence we need to call vpci_process_pending().
>
> For a tech preview, the lack of preemption would be OK. However, the commit 
> message should be updated to make clear there is no such preemption yet, or 
> avoid mentioning it.

Well, the comment was not added by me (by Roger, I guess); I just kept it.

As to the preemption, both map and unmap happen via vpci_process_pending, so
what is true for map is also true for unmap in this respect.
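
For reference, a simplified sketch of the relevant part of vpci_process_pending()
(paraphrased from xen/drivers/vpci/header.c; locking, error handling and cleanup
are omitted here). The same deferred-work path drives both directions, the
direction being just a flag recorded when the work was queued:

bool vpci_process_pending(struct vcpu *v)
{
    if ( v->vpci.mem )                /* rangeset of BAR memory left to process */
    {
        struct map_data data = {
            .d = v->domain,
            .map = v->vpci.map,       /* true: map, false: unmap */
        };

        /* map_range() maps or unmaps each range according to data.map. */
        if ( rangeset_consume_ranges(v->vpci.mem, map_range, &data) ==
             -ERESTART )
            return true;              /* more work pending, call again */

        /* Cleanup of v->vpci.mem and the decoding update happen here. */
    }

    return false;
}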

>
>>
>>>
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>>>> ---
>>>>    xen/arch/arm/traps.c | 6 ++++++
>>>>    1 file changed, 6 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 219ab3c3fbde..1571fb8afd03 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -34,6 +34,7 @@
>>>>    #include <xen/symbols.h>
>>>>    #include <xen/version.h>
>>>>    #include <xen/virtual_region.h>
>>>> +#include <xen/vpci.h>
>>>>
>>>>    #include <public/sched.h>
>>>>    #include <public/xen.h>
>>>> @@ -2304,6 +2305,11 @@ static bool check_for_vcpu_work(void)
>>>>        }
>>>>    #endif
>>>>
>>>> +    local_irq_enable();
>>>> +    if ( has_vpci(v->domain) && vpci_process_pending(v) )
>>>
>>> Looking at the code of vpci_process_pending(), it looks like there is some 
>>> rework to do for guests. Do you plan to handle it as part of the vPCI series?
>> Yes, vPCI code is heavily touched to support guest non-identity mappings
>
> I wasn't referring to the non-identity mappings here. I was referring to 
> TODOs such as:
>
>             /*
>              * FIXME: in case of failure remove the device from the domain.
>              * Note that there might still be leftover mappings. While this is
>              * safe for Dom0, for DomUs the domain will likely need to be
>              * killed in order to avoid leaking stale p2m mappings on
>              * failure.
>              */
>
> You still have them after the series reworking the vPCI. As for the 
> preemption, it is OK to ignore for a tech preview, although we want to 
> at least track them.
Please see above: both map and unmap are happening via vpci_process_pending
>
>>>
>>>> +        raise_softirq(SCHEDULE_SOFTIRQ);
>>>> +    local_irq_disable();
>>>> +
>>>
>>>  From my understanding of vpci_process_pending(), the function will return 
>>> true if there is more work to schedule.
>> Yes
>>> However, if check_for_vcpu_work() returns false, then we will return to 
>>> the guest before any work for vPCI has finished. This is because 
>>> check_for_vcpu_work() will not be called again.
>> Correct
>>>
>>> In this case, I think you want to return as soon as you know we need to 
>>> reschedule.
>> Not sure I understand this
>
I was more referring to "I think you want to return as soon as you know we need 
to reschedule."
> The return value of check_for_vcpu_work() indicates whether we have more 
> work to do before returning to the guest.
>
> When vpci_process_pending() returns true, it tells us we need to call the 
> function at least one more time before returning to the guest.
>
> In your current implementation, you leave that decision to whoever is next 
> in the function.
>
> It is not safe to return to the guest as long as vpci_process_pending() 
> returns true. So you want to write something like:
>
> if ( vpci_process_pending() )
>   return true;
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2291,6 +2291,9 @@ static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+    if ( has_vpci(v->domain) && vpci_process_pending(v) )
+        return true;
+
 #ifdef CONFIG_IOREQ_SERVER
     if ( domain_has_ioreq_server(v->domain) )
     {
Do you mean something like this?


>
>>>
>>> However, looking at the rest of the code, we already have a check for vpci 
>>> in the common IOREQ code.
>>
>> Which may not be enabled as it depends on CONFIG_IOREQ_SERVER.
>
> Right. My point is that when CONFIG_IOREQ_SERVER is set you would end up 
> calling vpci_process_pending() twice. This will have an impact on how long your 
> vCPU is going to run, because you are doubling the work.

So, you suggest that we have in the common IOREQ code something called, say,
arch_vpci_process_pending? In case of x86 it would have the code currently found
in the common IOREQ sources, and for Arm it would be a nop?

Any better suggestion for the name?
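
To make the idea concrete, a rough sketch (the name arch_vpci_process_pending is
only the proposal above; signatures and placement are illustrative, not final):

/* Arm: a nop, since traps.c already calls vpci_process_pending() on the
 * return-to-guest path (see the hunk earlier in this mail). */
static inline bool arch_vpci_process_pending(struct vcpu *v)
{
    return false;
}

/* x86: keep the check that currently lives in the common IOREQ code,
 * i.e. flush pending vPCI map/unmap work from here. */
static inline bool arch_vpci_process_pending(struct vcpu *v)
{
    return has_vpci(v->domain) && vpci_process_pending(v);
}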

>
>>
>> My understanding is that for x86 it is always enabled, but this might not be 
>> the case for Arm
>>
>>> So we would end up calling vpci_process_pending() twice.
>> So, if CONFIG_IOREQ_SERVER is not enabled then we end up with only calling 
>> it from traps.c on Arm
>>> Maybe we should move the call from the IOREQ to arch-code.
>>
>> Hm. I would rather think of moving it from IOREQ to some other common code:
>> for x86 (if my understanding is correct about CONFIG_IOREQ_SERVER) it is by
>> coincidence that we call vPCI code from there, and IOREQ is always enabled.
>
> I am not aware of another suitable common helper that would be called on the 
> return-to-guest path. Hence why I suggest possibly duplicating the code 
> in each arch path.
I see
>
> Cheers,
>

 

