
[Xen-devel] [PATCH 0/5] VT-d support for PV guests (V2)



I've done a few updates to the VT-d support for PV guests.  It now
also handles grant maps and unmaps properly.  I had also added a
selective IOTLB flush patch, but Xiaowei already posted a similar
patch yesterday.

There has been some concern about the impact of updating IOMMU tables
on grant table map/unmap, in particular when the IOTLBs need to be
flushed.

For grant mappings (i.e., promotion), current VT-d IOMMUs all seem
not to cache non-present table entries.  An IOTLB flush is therefore
not needed when adding new mappings.  This capability can easily be
detected, and a fallback to flushing can be enabled if required.
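
For what it's worth, the check boils down to the Caching Mode (CM)
bit, bit 7 of the VT-d Capability Register.  A minimal sketch (the
helper name is mine; the cap value is whatever was read from the
Capability Register at offset 0x08):

    #include <stdint.h>
    #include <stdbool.h>

    /* Caching Mode (CM) is bit 7 of the VT-d Capability Register
     * (offset 0x08 in the remapping hardware register set). */
    #define CAP_CM(cap)  (((cap) >> 7) & 1)

    /* True if this IOMMU may cache not-present entries, i.e. if a
     * flush is needed even when adding brand-new mappings. */
    static bool flush_needed_on_new_map(uint64_t cap)
    {
        return CAP_CM(cap);
    }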

Promoting mappings from read-only to read-write is a different story.
Ian says that some VT-d hardware supports it without flushing.
However, this capability is not communicated by the IOMMU hardware,
so always flushing in these cases may prove necessary to avoid DMA
page faults.  That said, my gut feeling is that promotion from
read-only to read-write is much rarer than from non-present to
read-write, so this might not be such a big performance issue.

For grant unmaps (i.e., demotion) there is definitely a need to flush
the IOTLB to avoid unauthorized access.  Some more relaxed flushing
models have been proposed, but I will not consider them right now
since they require some level of collaboration with the guests.  We
will in any case want to support a strict model, where an unmap
operation is guaranteed to leave no lingering mappings in the IOMMU
once it has completed.
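
For a single entry, the strict model amounts to the following (the
helper names are stand-ins for the real per-driver operations, not
the exact functions in the patchset):

    struct domain;

    /* Stand-ins for the real IOMMU operations. */
    int iommu_unmap_page(struct domain *d, unsigned long gfn);
    int iommu_flush_iotlb(struct domain *d, unsigned long gfn);

    /* Strict model: clear the IOMMU page-table entry and flush the
     * IOTLB synchronously, so no stale translation survives the
     * unmap. */
    static int strict_grant_unmap(struct domain *d, unsigned long gfn)
    {
        int rc = iommu_unmap_page(d, gfn);

        if ( rc == 0 )
            rc = iommu_flush_iotlb(d, gfn);

        /* Only after the flush completes may the frame be reused. */
        return rc;
    }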

The current patchset unmaps the page from the IOMMU (and flushes it
from the IOTLB) when the last grant mapping for the domain is
unmapped.  A guest can thus prevent the IOMMU unmap by keeping a
reference to the grant mapping.  Keeping a GNTMAP_device_map
reference is fine; it doesn't have to be a real mapping.
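
Schematically, this is a per-frame reference count (the counter and
helper below are illustrative, not the exact patchset code):

    struct domain;
    int iommu_unmap_page(struct domain *d, unsigned long gfn); /* stand-in */

    #define NR_TRACKED_FRAMES 1024          /* illustrative size */
    static unsigned int grant_map_refs[NR_TRACKED_FRAMES];

    /* Drop one grant-mapping reference to a frame; only when the last
     * reference goes away is the IOMMU mapping torn down (and the
     * IOTLB flushed).  A held GNTMAP_device_map reference keeps the
     * count nonzero, which is what lets a guest keep the IOMMU
     * mapping alive. */
    static void put_grant_map(struct domain *d, unsigned long gfn,
                              unsigned int idx)
    {
        if ( --grant_map_refs[idx] == 0 )
            iommu_unmap_page(d, gfn);
    }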

There are some optimizations possible for flushing the IOTLB entries.
One can, for example, use the VT-d invalidation queue support (if
present) to asynchronously flush the entries and sync up at the end
of the unmap operation.  A simpler approach, which makes sense when
flushing more than one entry (and which is actually supported by the
hardware I'm working on), is to flush all the entries for the domain
at the end of the unmap operation rather than flushing single
entries.
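
In pseudo-C, the batching looks like this (VT-d defines global,
domain-selective, and page-selective invalidation granularities; the
flush helpers below are illustrative):

    struct domain;
    void iommu_flush_iotlb_page(struct domain *d, unsigned long gfn);  /* PSI */
    void iommu_flush_iotlb_domain(struct domain *d);                   /* DSI */

    /* After a batch of unmaps, a single domain-selective invalidation
     * replaces one page-selective invalidation per entry. */
    static void flush_after_unmaps(struct domain *d,
                                   const unsigned long *gfns,
                                   unsigned int count)
    {
        if ( count == 1 )
            iommu_flush_iotlb_page(d, gfns[0]);
        else if ( count > 1 )
            iommu_flush_iotlb_domain(d);
    }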

In any case, these are optimizations that can be added later.  There
is no overhead if a guest domain doesn't actually have an IOMMU
enabled for it.  (I may get around to performing some 10GbE
benchmarks to confirm this next week.)  Another feature to be added
in the future is support for IOMMU read-only mappings.  Currently,
host read-only mappings are reflected as non-present in the IOMMU,
which is of course a bit more restrictive than necessary.

        eSk


--
 tools/python/xen/lowlevel/xc/xc.c     |   85 +++++++++++++++++++
 tools/python/xen/xend/server/pciif.py |   17 +++
 xen/arch/x86/domain.c                 |    7 +
 xen/arch/x86/domctl.c                 |   24 ++++-
 xen/arch/x86/mm.c                     |   14 +++
 xen/arch/x86/mm/p2m.c                 |   21 ++++
 xen/common/grant_table.c              |   50 ++++++++++-
 xen/common/memory.c                   |   16 +--
 xen/drivers/passthrough/iommu.c       |   65 ++++++++++++++-
 xen/drivers/passthrough/vtd/extern.h  |    3 
 xen/drivers/passthrough/vtd/iommu.c   |   68 +++++++++++----
 xen/drivers/passthrough/vtd/utils.c   |  146 +++++++++++++++-------------------
 xen/include/xen/hvm/iommu.h           |    3 
 xen/include/xen/iommu.h               |    1 
 xen/include/xen/sched.h               |    3 
 15 files changed, 397 insertions(+), 126 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

