
Re: [PATCH] xen/virtio: Handle cases when page offset > PAGE_SIZE properly


  • To: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>
  • From: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>
  • Date: Sat, 8 Oct 2022 13:21:44 +0000
  • Accept-language: en-US, ru-RU
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>
  • Delivery-date: Sat, 08 Oct 2022 13:22:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHY2lCewf5wqMeDhkqfFwKHpttMBq4EV/CAgAAc7QCAAAIqgIAABikA
  • Thread-topic: [PATCH] xen/virtio: Handle cases when page offset > PAGE_SIZE properly

On 08.10.22 15:59, Xenia Ragiadakou wrote:

Hello Xenia

>
> On 10/8/22 15:52, Oleksandr Tyshchenko wrote:
>>
>> On 08.10.22 14:08, Xenia Ragiadakou wrote:
>>
>> Hello Xenia
>>
>>>
>>> On 10/7/22 16:27, Oleksandr Tyshchenko wrote:
>>>
>>> Hi Oleksandr
>>>
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>>
>>>> The offset in the page passed to xen_grant_dma_map_page()
>>>> can be > PAGE_SIZE even if the guest uses the same page granularity
>>>> as Xen (4KB).
>>>>
>>>> Before this patch, if such a case happened we ended up providing
>>>> grants for the whole region in xen_grant_dma_map_page(), which
>>>> was really unnecessary. What is more, we ended up not releasing all
>>>> grants which represented that region in xen_grant_dma_unmap_page().
>>>>
>>>> This patch updates the code to be able to deal with such cases.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>> ---
>>>> Cc: Juergen Gross <jgross@xxxxxxxx>
>>>> Cc: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>
>>>>
>>>> Depends on:
>>>> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@xxxxxxxxx/
>>>>
>>>> Should go in only after that series.
>>>> ---
>>>>    drivers/xen/grant-dma-ops.c | 8 +++++---
>>>>    1 file changed, 5 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>>>> index c66f56d24013..1385f0e686fe 100644
>>>> --- a/drivers/xen/grant-dma-ops.c
>>>> +++ b/drivers/xen/grant-dma-ops.c
>>>> @@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>>>>                         unsigned long attrs)
>>>>    {
>>>>        struct xen_grant_dma_data *data;
>>>> -    unsigned int i, n_pages = PFN_UP(offset + size);
>>>> +    unsigned long dma_offset = offset_in_page(offset),
>>>> +            gfn_offset = PFN_DOWN(offset);
>>>> +    unsigned int i, n_pages = PFN_UP(dma_offset + size);
>>>
>>> IIUC, the above with a later patch will become:
>>>
>>> dma_offset = xen_offset_in_page(offset)
>>> gfn_offset = XEN_PFN_DOWN(offset)
>>> n_pages = XEN_PFN_UP(dma_offset + size)
>>
>>
>> If by "later" patch you meant "xen/virtio: Convert
>> PAGE_SIZE/PAGE_SHIFT/PFN_UP to Xen counterparts", then yes, exactly.
>
> Ah ok, I see.
>
>>>
>>>
>>>>        grant_ref_t grant;
>>>>        dma_addr_t dma_handle;
>>>> @@ -187,10 +189,10 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>>>>          for (i = 0; i < n_pages; i++) {
>>>>            gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
>>>> -                xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
>>>> +                xen_page_to_gfn(page) + i + gfn_offset, dir == DMA_TO_DEVICE);
>>>
>>> Here, why is the pfn not calculated before passing it to pfn_to_gfn()?
>>> I mean sth like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)
>>
>> The gfn_offset is just a constant value here, which simply says how many
>> gfns we should skip. But ...
>>
>> ... I think I get your point. So, if a region that is contiguous in
>> pfn can be non-contiguous in gfn (which seems to be the case for x86
>> PV, but I may be mistaken), we should indeed use the open-coded
>> construction "pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)". And
>> gfn_offset should then be renamed to pfn_offset.
>>
>>
>> Correct?
>
> Yes, that's what I had in mind, unless I'm missing sth.


OK, thanks for confirming. So I will create V2 then.
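I.e. the mapping loop in V2 would look roughly like this (just a sketch to
capture the idea discussed above, with gfn_offset renamed to pfn_offset;
not tested yet):

    for (i = 0; i < n_pages; i++) {
        /*
         * Translate each Xen pfn of the region individually, since a
         * region that is contiguous in pfn may be non-contiguous in gfn
         * (e.g. x86 PV).
         */
        gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
                pfn_to_gfn(page_to_xen_pfn(page) + pfn_offset + i),
                dir == DMA_TO_DEVICE);
    }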


>
>>>
>>>>        }
>>>>    -    dma_handle = grant_to_dma(grant) + offset;
>>>> +    dma_handle = grant_to_dma(grant) + dma_offset;
>>>>          return dma_handle;
>>>>    }
>>>
>
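For completeness, a concrete example of the offset handling (assuming 4KB
pages, numbers picked just for illustration): with offset = 9000 and
size = 256 we get dma_offset = offset_in_page(9000) = 808,
gfn_offset = PFN_DOWN(9000) = 2 and n_pages = PFN_UP(808 + 256) = 1, i.e.
a single grant starting two frames into the allocation, whereas the old
n_pages = PFN_UP(9000 + 256) = 3 would have granted three frames.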
-- 
Regards,

Oleksandr Tyshchenko

 

