
Re: [Xen-devel] Xen virtual IOMMU high level design doc





On 11/24/2016 9:37 PM, Edgar E. Iglesias wrote:
On Thu, Nov 24, 2016 at 02:49:41PM +0800, Lan Tianyu wrote:
On 24 Nov 2016 12:09, Edgar E. Iglesias wrote:
Hi,

I have a few questions.

If I understand correctly, you'll be emulating an Intel IOMMU in Xen.
So guests will essentially create Intel IOMMU-style page-tables.

If we were to use this on Xen/ARM, we would likely be modelling an ARM
SMMU as a vIOMMU. Since Xen on ARM does not use QEMU for emulation, the
hypervisor ops for QEMU's Xen dummy IOMMU queries would not really be used.
Do I understand this correctly?

I think they could be called from the toolstack. This is why I was
saying in the other thread that the hypercalls should be general enough
that QEMU is not the only caller.

For PVH and ARM guests, the toolstack should be able to setup the vIOMMU
on behalf of the guest without QEMU intervention.
OK, I see. Or, I think I understand, not sure :-)

In QEMU, when someone changes mappings in an IOMMU, there will be a notifier
to tell caches upstream that the mappings have changed. I think we will need
to prepare for that, i.e. when TCG CPUs sit behind an IOMMU.

On the Xen side, we may notify the pIOMMU driver about mapping changes by
calling the pIOMMU driver's API from the vIOMMU.

I was referring to the other way around. When a guest modifies the mappings
for a vIOMMU, the driver domain with QEMU and vDevices needs to be notified.

I couldn't find any mention of this in the document...

The QEMU side won't have an IOTLB cache; all DMA translation information is in the hypervisor. All of a vDevice's DMA requests are passed to the hypervisor, the hypervisor returns the translated address, and QEMU then finishes the DMA operation.
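
Something like the rough sketch below is what I have in mind for the QEMU side. The structure layout and the xc_viommu_l2_translate()/dma_buf_rw_gpa() names are only placeholders for illustration, not the final interface:

/*
 * Rough sketch only: the structure layout, the xc_viommu_l2_translate()
 * wrapper and dma_buf_rw_gpa() are placeholder names for illustration,
 * not the final interface.
 */
#include <stdint.h>

struct viommu_l2_translate {
    uint32_t viommu_id;       /* which vIOMMU instance                   */
    uint16_t sbdf;            /* source-id of the vDevice issuing DMA    */
    uint16_t pad;
    uint64_t iova;            /* in:  IO virtual address of the request  */
    uint64_t translated_addr; /* out: translated guest physical address  */
    uint32_t permission;      /* out: access permission of the mapping   */
};

/* Hypothetical helpers, declared only so the sketch is self-contained. */
int xc_viommu_l2_translate(void *xch, struct viommu_l2_translate *req);
int dma_buf_rw_gpa(uint64_t gpa, void *buf, uint64_t len, uint32_t perm);

/* Dummy vIOMMU in QEMU: translate through Xen, then complete the DMA. */
int vdev_dma_access(void *xch, struct viommu_l2_translate *req,
                    void *buf, uint64_t len)
{
    int rc = xc_viommu_l2_translate(xch, req);  /* ask the hypervisor   */

    if (rc)
        return rc;                              /* no mapping -> fault  */

    /* Finish the DMA against the translated guest physical address. */
    return dma_buf_rw_gpa(req->translated_addr, buf, len, req->permission);
}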

There is a race condition between the IOTLB invalidation operation and vDevices' in-flight DMA. We proposed a solution in "3.2 l2 translation - For virtual PCI device". We hope to take advantage of the current ioreq mechanism to achieve something like a notifier.

Both the vIOMMU in the hypervisor and the dummy vIOMMU in QEMU register the same MMIO region. When there is an invalidation MMIO access and the hypervisor wants to notify QEMU, the vIOMMU's MMIO handler returns X86EMUL_UNHANDLEABLE and the IO emulation handler is supposed to send an IO request to QEMU. The dummy vIOMMU in QEMU receives the event and starts to drain in-flight DMA operations.
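
A minimal sketch of the hypervisor-side handler, assuming Xen's X86EMUL_* return codes; the constants below stand in for the real definitions and the viommu_* helpers are placeholder names, not actual code:

/*
 * Minimal sketch: the constants stand in for the definitions in Xen's
 * x86_emulate.h, and the two viommu_* helpers are placeholder names.
 */
#include <stdint.h>
#include <stdbool.h>

#define X86EMUL_OKAY          0   /* access handled inside the hypervisor  */
#define X86EMUL_UNHANDLEABLE  1   /* let the ioreq path forward it to QEMU */

bool viommu_write_is_invalidation(uint64_t offset, uint64_t val);
void viommu_emulate_reg_write(uint64_t offset, uint64_t val);

/* vIOMMU MMIO write handler for the emulated register block. */
int viommu_mmio_write(uint64_t offset, uint64_t val)
{
    if (viommu_write_is_invalidation(offset, val))
        /*
         * Don't claim the access: the IO emulation handler then sends an
         * IO request to the dummy vIOMMU in QEMU, which drains in-flight
         * DMA before the invalidation is treated as complete.
         */
        return X86EMUL_UNHANDLEABLE;

    viommu_emulate_reg_write(offset, val);   /* normal register emulation */
    return X86EMUL_OKAY;
}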






Another area that may need changes is that on ARM we need the map-query to return
the memory attributes for the given mapping. Today QEMU or any emulator
doesn't use them much, but in the future things may change.

What about the mem attributes?
It's very likely we'll add support for memory attributes for IOMMUs in QEMU
at some point.
Emulated IOMMUs will thus have the ability to modify attributes (i.e.
SourceIDs, cacheability, etc.). Perhaps we could allocate or reserve a uint64_t
for attributes, TBD later, in the query struct.

Sounds like you hope to extend the capability variable in the query struct to uint64_t to support more future features, right?

I have added a "permission" variable in struct l2_translation to return the vIOMMU's memory access permission for the vDevice's DMA request. Not sure whether it can meet your requirement.
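
Roughly like this; the field names and permission bits are not final, and the extra uint64_t is only a reserved placeholder for the attributes you mentioned:

/* Sketch of the query struct: field names, sizes and bits are not final. */
#include <stdint.h>

#define VIOMMU_PERM_READ   (1u << 0)   /* assumed permission bits */
#define VIOMMU_PERM_WRITE  (1u << 1)

struct l2_translation {
    uint64_t iova;         /* in:  IO virtual address of the DMA request */
    uint64_t translated;   /* out: translated guest physical address     */
    uint32_t permission;   /* out: VIOMMU_PERM_* access rights           */
    uint32_t pad;
    uint64_t attributes;   /* reserved: memory attributes (cacheability,
                            * SourceID handling, ...) to be defined later */
};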





For SVM, we will also need to deal with page-table faults by the IOMMU.
So I think there will need to be a channel from Xen to the guest to report these.

Yes, the vIOMMU should forward the page-fault event to the guest. On the VT-d side,
we will trigger VT-d's interrupt to notify the guest about the event.

OK, Cool.

Perhaps you should document how this (and the map/unmap notifiers) will work?

This is VT-d specific, dealing with some fault events, and is just like how some other virtual device models emulate their interrupts, so I didn't put it in this design document.

For mapping changes, please see the first comments above.
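
Conceptually, the VT-d fault path would look like the sketch below. The helper names are made up for illustration; the real emulation fills a VT-d fault recording register (FRCD) and raises the fault event interrupt when it is not masked in FECTL:

/*
 * Conceptual sketch of forwarding an IOMMU page fault to the guest. The
 * helper names are placeholders; the real emulation follows the VT-d
 * spec's fault recording registers and fault event interrupt.
 */
#include <stdint.h>
#include <stdbool.h>

struct viommu_fault {
    uint16_t source_id;   /* requester BDF that caused the fault */
    uint64_t address;     /* faulting IO virtual address         */
    uint8_t  reason;      /* fault reason code                   */
    bool     is_write;    /* access type                         */
};

void viommu_record_fault(const struct viommu_fault *f); /* fill an FRCD entry */
bool viommu_fault_event_masked(void);                   /* FECTL.IM emulation */
void viommu_inject_fault_event(void);                   /* fault event MSI    */

void viommu_report_fault(const struct viommu_fault *f)
{
    viommu_record_fault(f);            /* latch the fault for the guest driver */
    if (!viommu_fault_event_masked())
        viommu_inject_fault_event();   /* notify the guest via the vIOMMU irq  */
}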

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

