
[Xen-devel] [PATCH v2 0/4] xen/arm: introduce GNTTABOP_cache_flush

Hi all,
this patch series introduces a new hypercall to perform cache
maintenance operations on behalf of the guest. It is useful for dom0 to
be able to flush from the cache any pages involved in a DMA operation
with non-coherent devices.

It also removes XENFEAT_grant_map_identity, as the feature is no longer
necessary: it was used to achieve the same goal, but the guest can now
use the new hypercall instead. Keeping the flag would also carry a
significant performance cost, as a new p2m mapping is created and then
destroyed for every grant that is mapped and unmapped in dom0.

Changes in v2:
- make grant_map_exists static;
- remove the spin_lock in grant_map_exists;
- move the hypercall to GNTTABOP;
- do not check for mfn_to_page errors in GNTTABOP_cache_flush;
- take a reference to the page in GNTTABOP_cache_flush;
- replace printk with gdprintk in GNTTABOP_cache_flush;
- split long line in GNTTABOP_cache_flush;
- remove out label in GNTTABOP_cache_flush;
- move rcu_lock_current_domain down before the loop in
- take a spin_lock before calling grant_map_exists in

Stefano Stabellini (4):
      xen/arm: introduce invalidate_xen_dcache_va_range
      xen: introduce grant_map_exists
      xen/arm: introduce GNTTABOP_cache_flush
      Revert "xen/arm: introduce XENFEAT_grant_map_identity"

 xen/common/grant_table.c           |  133 +++++++++++++++++++++++++++++-------
 xen/common/kernel.c                |    2 -
 xen/drivers/passthrough/arm/smmu.c |   33 +++++++++
 xen/include/asm-arm/arm32/page.h   |    3 +
 xen/include/asm-arm/arm64/page.h   |    3 +
 xen/include/asm-arm/grant_table.h  |    3 +-
 xen/include/asm-arm/page.h         |   30 ++++++++
 xen/include/public/features.h      |    4 +-
 xen/include/public/grant_table.h   |   19 ++++++
 9 files changed, 201 insertions(+), 29 deletions(-)

Xen-devel mailing list


