RE: [Xen-devel] [PATCH 0/5] VT-d support for PV guests
> > Not that I particularly think it matters, but does the patch
> > configure the IOMMU to distinguish between read-only and read-write
> > access to a guest's own memory or granted memory? If not, we should
> > at least clearly document that we're being a little more permissive.
>
> The idea was indeed to distinguish between these properly. However,
> the current VT-d code only handles read-write or no-access. For PV
> guests I've made it so that page tables and such are mapped with no
> access in the IOMMU. This is a bit more restrictive than necessary,
> but it shouldn't really matter for the common usage scenarios.
>
> Anyhow, read-only access can indeed be supported for VT-d. I just
> wanted to get basic PV guest support in there first. Also, I'm not
> familiar with AMD's IOMMU, but I would guess that it also supports
> read-only access.

OK, hopefully someone can fill in the missing bits in the VT-d support.

> > Have you got any with-and-without performance results with a decent
> > high-throughput device (e.g. an HBA or 10Gb/s NIC)?
>
> I don't have a 10GbE NIC in my VT-d enabled machine right now, so I
> can't test it. We have however tried with a 10GbE NIC running in dom0
> with VT-d enabled, and there was as far as I remember no performance
> hit. Of course, any performance degradation will largely depend on
> the networking memory footprint and the size of the IOTLB.

Indeed, that's why I'd like to see some measurements, both for dom0 IO
and for dom0 doing IO on behalf of another domain.

I'm also interested to understand what the overhead of page type
change / balloon operations is. Do you synchronously invalidate the
entries in the IOMMU? How slow is that?

> > It would be good if you could provide a bit more detail on when the
> > patch populates IOMMU entries, and how it keeps them in sync. For
> > example, does the IOMMU map all the guest's memory, or just that
> > which will soon be the subject of a DMA? How synchronous is the
> > patch in removing mappings, e.g. due to page type changes (pagetable
> > pages, balloon driver) or due to unmapping grants?
>
> All writable memory is initially mapped in the IOMMU. Page type
> changes are also reflected there. In general all maps and unmaps to a
> domain are synced with the IOMMU. According to the feedback I got I
> apparently missed some places, though. Will look into this and fix
> it.

Is "demotion" of access handled synchronously, or do you have some
tricks to mitigate the synchronization cost?

> It's clear that performance will pretty much suck if you do frequent
> updates in grant tables, but the whole idea of having passthrough
> access for NICs is to avoid the netfront/netback data plane scheme
> altogether. That leaves you with grant table updates for block device
> access. I don't know what the expected update frequency is for that
> one.

I don't entirely buy this -- I think we need to make grant maps/unmaps
fast too.

We've discussed schemes to make this more efficient by doing the IOMMU
operations at grant map time (where they can be easily batched) rather
than at dma_map time. We've also talked about using a kmap-style area
of physical address space to cycle the mappings through, to avoid
having to do so many synchronous invalidates (at the expense of
allowing a driver domain to DMA to a page for a little longer than it
strictly ought to).
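
To make the batching idea concrete, here is a minimal standalone
sketch in C. All of the names (struct iommu_batch, iommu_set_entry(),
iommu_flush_iotlb(), grant_map_flush()) are made up for illustration;
nothing here is from the patch or the Xen tree. The point is simply
that one grant-map hypercall's worth of pages pays for a single IOTLB
flush instead of one flush per page:

    /* Hypothetical sketch of batching IOMMU updates at grant-map
     * time; these are not real Xen interfaces. */
    #include <stdint.h>
    #include <stdio.h>

    #define BATCH_MAX 32

    struct iommu_batch {
        uint64_t gfn[BATCH_MAX];      /* guest frame numbers    */
        uint64_t mfn[BATCH_MAX];      /* backing machine frames */
        unsigned int nr;
    };

    /* Stand-in for writing one IOMMU page-table entry (cheap). */
    static void iommu_set_entry(uint64_t gfn, uint64_t mfn)
    {
        printf("map gfn 0x%llx -> mfn 0x%llx\n",
               (unsigned long long)gfn, (unsigned long long)mfn);
    }

    /* Stand-in for the IOTLB flush: the expensive, synchronous
     * step whose cost the batching amortises. */
    static void iommu_flush_iotlb(void)
    {
        printf("IOTLB flush\n");
    }

    /* Map every page granted in one hypercall, then flush once. */
    static void grant_map_flush(struct iommu_batch *b)
    {
        for (unsigned int i = 0; i < b->nr; i++)
            iommu_set_entry(b->gfn[i], b->mfn[i]);
        iommu_flush_iotlb();
        b->nr = 0;
    }

    int main(void)
    {
        struct iommu_batch b = {
            .gfn = { 0x1000, 0x1001 },
            .mfn = { 0xa000, 0xb000 },
            .nr  = 2,
        };
        grant_map_flush(&b);
        return 0;
    }

The kmap-style cycling area mentioned above pushes the same trade-off
further: mapping slots would be reused round-robin and only flushed
when the ring wraps, accepting the window of stale access in exchange
for far fewer flushes.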
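
For comparison, the synchronous "demotion" path asked about above
would, in the most straightforward scheme, look roughly like the
sketch below (again with hypothetical helper names, not the patch's
actual code). The flush sits on the critical path of the type change,
which is exactly why its latency matters:

    /* Hypothetical sketch of a synchronous IOMMU demotion on a
     * page type change; not real Xen code. */
    #include <stdint.h>
    #include <stdio.h>

    static void iommu_clear_entry(uint64_t gfn)
    {
        printf("clear IOMMU entry for gfn 0x%llx\n",
               (unsigned long long)gfn);
    }

    /* Must not return until the hardware has dropped the stale
     * translation; until then the device can still DMA to the
     * page being demoted. */
    static void iommu_flush_iotlb_sync(void)
    {
        printf("synchronous IOTLB flush\n");
    }

    /* Called when a frame stops being ordinary writable guest
     * memory, e.g. it becomes a pagetable page or is handed back
     * by the balloon driver. */
    static void page_type_demote(uint64_t gfn)
    {
        iommu_clear_entry(gfn);
        iommu_flush_iotlb_sync();
    }

    int main(void)
    {
        page_type_demote(0x1000);
        return 0;
    }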

Ian