
Re: [Xen-devel] PV drivers and zero copying



(+ Joao)

On 31/07/17 09:34, Oleksandr Andrushchenko wrote:
Hi, all!

Hi Oleksandr,

The aim of this mail is to highlight and discuss possible approaches to
implementing zero copying for PV drivers. The rationale behind this is
that there are use-cases where drivers operate on big shared buffers,
e.g. display, and copying memory from the frontend's buffer into the
backend's one may significantly hurt system performance (for example,
for a para-virtual display running at full HD resolution at 60Hz it is
approximately 475MB/sec).
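(For reference, assuming a 1920x1080 framebuffer at 4 bytes per pixel,
which is an assumption the figure above does not state: 1920 * 1080 * 4
bytes * 60 Hz = 497,664,000 bytes/sec, i.e. roughly 475 MiB/sec.)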

Assumptions (which actually fit ARM platforms, but can be extended to
other platforms as well): Dom0 is a 1:1 mapped privileged domain and
runs the backend driver/software; DomU is an unprivileged domain
without 1:1 memory mapping and runs the frontend driver.

I would rather avoid sticking with this assumption on ARM. It was only meant to be a workaround for platforms without an IOMMU (see [1]) and we will get into trouble when using an IOMMU.

For instance, there is no requirement for the IOMMU to support as many address bits as the processor. So 1:1 mapping will not be an option here.


Buffer origin: while implementing zero copying the buffer allocation
can happen either on DomU's end or on Dom0's, depending on the use-case
and HW capabilities/availability:
- When DomU allocates: it cannot guarantee physical continuity of the
  allocated buffers, but Dom0's HW *can* handle non-contiguous memory
  buffers allocated by DomU for memory operations (DMA, for example),
  e.g. either with IOMMU help or by any other means (a HW block's own
  MMU).
- When Dom0 allocates: as it is mapped 1:1 it can allocate physically
  contiguous memory; this covers the case when Dom0's HW *cannot*
  handle non-contiguous memory buffers allocated by DomU for memory
  operations by any means.

I am not sure I follow this. How is zero copy related to 1:1 mapping? Is it because you have hardware that does not support scatter/gather IO or an IOMMU?


1 Sharing with granted references
==================================

1-1 Buffer allocated @DomU
--------------------------
@DomU
    alloc_xenballooned_pages(nr_pages, pages);
    cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
    gnttab_grant_foreign_access_ref(cur_ref, otherend_id, ...);
    <pass grant_ref_t[] to Dom0>
@Dom0
    alloc_xenballooned_pages(nr_pages, pages);
    gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_device_map,
                      grefs[i], otherend_id);
    gnttab_map_refs(map_ops, NULL, pages, nr_pages);
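For illustration, here is a minimal sketch of the DomU side of 1-1 as a
Linux kernel snippet, expanding the per-page calls above into a loop
(the function name share_buffer and the error handling policy are mine,
not taken from an existing driver):

    #include <xen/balloon.h>
    #include <xen/grant_table.h>
    #include <xen/page.h>

    static int share_buffer(domid_t otherend_id, int nr_pages,
                            struct page **pages, grant_ref_t *grefs)
    {
        grant_ref_t priv_gref_head;
        int i, ret;

        /* Pages to be shared with the other end. */
        ret = alloc_xenballooned_pages(nr_pages, pages);
        if (ret)
            return ret;

        /* Reserve nr_pages grant references up front. */
        ret = gnttab_alloc_grant_references(nr_pages, &priv_gref_head);
        if (ret < 0)
            return ret;

        for (i = 0; i < nr_pages; i++) {
            int cur_ref = gnttab_claim_grant_reference(&priv_gref_head);

            /* Grant the other end read/write access to this frame. */
            gnttab_grant_foreign_access_ref(cur_ref, otherend_id,
                                            xen_page_to_gfn(pages[i]), 0);
            grefs[i] = cur_ref;
        }
        /* grefs[] is then passed to Dom0, e.g. via the ring or xenstore. */
        return 0;
    }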

1-2 Buffer allocated @Dom0
--------------------------
@Dom0
    <the code below is equivalent to alloc_xenballooned_pages without
     PV MMU support as seen in the balloon driver; the difference is that
     the pages are explicitly allocated to be used for DMA>
    dma_alloc_wc(dev, size, &dev_addr, GFP_KERNEL | __GFP_NOWARN);
    HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
    cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
    gnttab_grant_foreign_access_ref(cur_ref, otherend_id, ...);
    <pass grant_ref_t[] to DomU>
@DomU
    alloc_xenballooned_pages(nr_pages, pages);
    gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map, grefs[i],
                      otherend_id);
    gnttab_map_refs(map_ops, NULL, pages, nr_pages);
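A minimal sketch of the mapping side used in both 1-1 (@Dom0) and 1-2
(@DomU), plus the matching teardown path, again as a hypothetical Linux
snippet with error handling trimmed (add GNTMAP_device_map to the flags
when the mapping side also needs to DMA into the pages, as in 1-1):

    #include <linux/mm.h>
    #include <xen/balloon.h>
    #include <xen/grant_table.h>

    static int map_grefs(domid_t otherend_id, int nr_pages,
                         grant_ref_t *grefs, struct page **pages,
                         struct gnttab_map_grant_ref *map_ops)
    {
        int i, ret;

        /* Ballooned pages provide local frames to map the grants into. */
        ret = alloc_xenballooned_pages(nr_pages, pages);
        if (ret)
            return ret;

        for (i = 0; i < nr_pages; i++) {
            unsigned long addr =
                (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));

            gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map,
                              grefs[i], otherend_id);
        }
        return gnttab_map_refs(map_ops, NULL, pages, nr_pages);
    }

    /* Teardown: unmap the grants and give the ballooned pages back. */
    static void unmap_grefs(int nr_pages, struct page **pages,
                            struct gnttab_map_grant_ref *map_ops,
                            struct gnttab_unmap_grant_ref *unmap_ops)
    {
        int i;

        for (i = 0; i < nr_pages; i++)
            gnttab_set_unmap_op(&unmap_ops[i],
                                (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i])),
                                GNTMAP_host_map, map_ops[i].handle);

        gnttab_unmap_refs(unmap_ops, NULL, pages, nr_pages);
        free_xenballooned_pages(nr_pages, pages);
    }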

2 Sharing with page transfers (GNTTABOP_transfer)
==================================================
FIXME: This use-case seems to be only needed when allocating physically
contiguous buffers at Dom0. For the reverse path the 1-1 method can be used.

This approach relies on the GNTTABOP_transfer API: "transfer <frame> to a
foreign domain. The foreign domain has previously registered its interest
in the transfer via <domid, ref>"; for the full documentation see [1]. The
process of transferring pages is explained by Christopher Clark at [2] and
an implementation is available at [3], [4]. The relevant logic is in
xen/common/grant_table.c:gnttab_transfer.

Basic workflow explained to me by Christopher:
- The mfn starts as owned by the sending domain, and that domain removes
  any mappings of it from its page tables. Xen will enforce that the
  reference count must be low enough for the transfer to succeed.
- The receiving domain indicates interest in receiving a page by writing
  an entry in its grant table.
- You'll need to communicate the grant ref from the receiver to the
  sender (e.g. via xenstore or another existing channel).
- The sending domain invokes the hypercall, with the grant ref from the
  receiving domain (a sketch of this step follows the list).
- The sending domain notifies the receiving domain somehow that the
  transfer has completed (e.g. send an event or via xenstore).
- Once the transfer has completed, the receiving domain will need to map
  the newly assigned page.
- Note: for the transfer, the receiving domain must have enough headroom
  to receive the new page, which means it must not have allocated all of
  its memory quota already prior to the transfer. Typically this can be
  ensured by freeing enough memory back to Xen before writing the grant
  ref.
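To make the sending side concrete, here is a rough, hypothetical sketch
of the hypercall invocation from a Linux domain (transfer_frame is my
name; the frame must already be unmapped from the sender's page tables
as described above):

    #include <linux/errno.h>
    #include <xen/interface/grant_table.h>
    #include <asm/xen/hypercall.h>

    static int transfer_frame(xen_pfn_t mfn, domid_t receiving_domid,
                              grant_ref_t gref)
    {
        struct gnttab_transfer xfer = {
            .mfn   = mfn,             /* frame owned by the sending domain */
            .domid = receiving_domid, /* domain that registered interest */
            .ref   = gref,            /* receiver's grant table entry */
        };
        int rc;

        rc = HYPERVISOR_grant_table_op(GNTTABOP_transfer, &xfer, 1);
        if (rc)
            return rc;

        /* xfer.status is GNTST_okay (0) if the frame changed ownership. */
        return xfer.status == GNTST_okay ? 0 : -EIO;
    }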

3 Sharing with page exchange (XENMEM_exchange)
==============================================

This API was pointed out to me by Stefano Stabellini as one of the
possible ways to achieve zero copying and share physically contiguous
buffers. It is used by the x86 SWIOTLB code (xen_create_contiguous_region,
[5]), but as per my understanding this API cannot be used on ARM as of
now [6].
Conclusion: not an option for ARM at the moment.
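For completeness, a rough sketch of the hypercall's shape, loosely
modeled on what xen_create_contiguous_region does on x86 (the function
name, parameters and simplified error handling are mine; shown only to
illustrate the API, since it is not usable on ARM today):

    #include <xen/interface/xen.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    /* Trade 2^order order-0 frames for one physically contiguous
     * order-'order' extent below 'address_bits'. */
    static long exchange_for_contiguous(xen_pfn_t *frames_in,
                                        xen_pfn_t *frame_out,
                                        unsigned int order,
                                        unsigned int address_bits)
    {
        struct xen_memory_exchange exchange = {
            .in = {
                .nr_extents   = 1UL << order,
                .extent_order = 0,
                .domid        = DOMID_SELF,
            },
            .out = {
                .nr_extents   = 1,
                .extent_order = order,
                .address_bits = address_bits,
                .domid        = DOMID_SELF,
            },
        };

        set_xen_guest_handle(exchange.in.extent_start, frames_in);
        set_xen_guest_handle(exchange.out.extent_start, frame_out);

        return HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
    }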

Comparison for display use-case
===============================

1 Number of grant references used
1-1 grant references: nr_pages
1-2 GNTTABOP_transfer: nr_pages
1-3 XENMEM_exchange: not an option

2 Effect of DomU crash on Dom0 (its mapped pages)
2-1 grant references: pages can be unmapped by Dom0, Dom0 is fully recovered
2-2 GNTTABOP_transfer: pages will be returned to the Hypervisor, lost for Dom0
2-3 XENMEM_exchange: not an option

3 Security issues from sharing Dom0 pages to DomU
3-1 grant references: none
3-2 GNTTABOP_transfer: none
3-3 XENMEM_exchange: not an option

At the moment approach 1 with granted references seems to be the winner
for sharing buffers both ways, i.e. Dom0 -> DomU and DomU -> Dom0.

Conclusion
==========

I would like to get some feedback from the community on which approach
is more suitable for sharing large buffers and to have a clear vision of
the pros and cons of each one: please feel free to add other metrics I
missed and correct the ones I commented on. I would appreciate help on
comparing approaches 2 and 3 as I have little knowledge of these APIs
(2 seems to be addressed by Christopher, and 3 seems to be relevant to
what Konrad/Stefano do WRT SWIOTLB).

Thank you,

Oleksandr

[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/grant_table.h;h=018036e825f8f2999812cdb089f7fa2195789231;hb=HEAD#l414
[2] https://xenbits.xen.org/docs/4.9-testing/misc/grant-tables.txt
[3] https://xenbits.xen.org/hg/linux-2.6.18-xen.hg/file/7d14715efcac/drivers/xen/netfront
[4] https://xenbits.xen.org/hg/linux-2.6.18-xen.hg/file/7d14715efcac/drivers/xen/netback
[5] http://elixir.free-electrons.com/linux/latest/source/arch/x86/xen/mmu_pv.c#L2618
[6] https://lists.xenproject.org/archives/html/xen-devel/2015-12/msg02110.html


--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

