
RE: [Xen-devel] [PATCH 0/5] VT-d support for PV guests



>From: Ian Pratt [mailto:Ian.Pratt@xxxxxxxxxxxxx] 
>Sent: 21 May 2008 13:05
>> >Is "demotion" of access handled synchronously, or do you have some
>> >tricks to mitigate the synchronization?
>> 
>> All changes need to be handled synchronously, as DMA requests are not
>> restartable; a VT-d fault is only an async event notification.
>> Hardware bits are designed in such a way that all expected permission
>> controls have to exist before the device actually issues the access
>> request.
>
>You are talking about 'promotion' (adding more permissions). Demotion
>requires flushing entries in the TLB and is typically more expensive,
>hence the desire to 'batch' the synchronization.

So you're talking about the TLB inside the CPU? IMO, both demotion and
promotion require an IOTLB flush. Even for promotion, it's not like an
instruction access, which can trigger a fault for Xen to promote lazily
and then restart the execution...

Batching IOTLB flushes is a good direction, but it requires some
cooperation from the guest, e.g. the guest driver must not attempt to
use the set of frames whose mappings are still changing. So to me it's
more a change on the guest side to batch grant/m2p requests together.
Otherwise Xen itself doesn't know when a changed mapping will be used
by the guest, and thus has to force a flush for each change before
resuming the guest.
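As a rough sketch of the trade-off (purely illustrative C, not Xen code; all names such as `iotlb_flush` and `unmap_batched` are invented for this example), per-change synchronous flushing versus batched flushing could look like:

```c
#include <assert.h>

/* Illustrative model only: count how many (expensive) IOTLB flushes
 * each strategy issues for a stream of mapping demotions. */

#define BATCH_MAX 16

static int iotlb_flush_count;            /* flushes issued so far */
static unsigned long pending[BATCH_MAX]; /* frames awaiting a flush */
static int npending;

/* Stands in for a real VT-d IOTLB invalidation. */
static void iotlb_flush(void)
{
    iotlb_flush_count++;
    npending = 0;
}

/* Synchronous style: every demotion forces an immediate flush,
 * because Xen can't know when the guest will reuse the frame. */
static void unmap_sync(unsigned long gfn)
{
    (void)gfn;
    iotlb_flush();
}

/* Batched style: queue the change and flush only when the batch is
 * full -- valid only if the guest cooperates and doesn't touch the
 * pending frames in the meantime. */
static void unmap_batched(unsigned long gfn)
{
    pending[npending++] = gfn;
    if (npending == BATCH_MAX)
        iotlb_flush();
}
```

With 32 demotions, the synchronous path issues 32 flushes while the batched path issues only 2, which is the saving being discussed.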

>
>> >kmap-style area of physical address space to cycle the mappings
>> >through to avoid having to do so many synchronous invalidates (at
>> >the expense of allowing a driver domain to be able to DMA to a page
>> >for a little longer than it strictly ought to).
>> 
>> Could you elaborate a bit on how the kmap-style area helps here? The
>> key point is whether the frequency of p2m mapping changes can be
>> reduced...
>
>A window of guest physical address space is created that is used to
>create mappings to granted pages. The next available free slot is used
>when creating a mapping. When the end of the window is reached, flush
>the IOMMU. (Actually, you can do better by issuing a flush at the
>halfway point and then synchronizing against the flush at the end of
>the window, effectively double buffering.)
>
>This typically avoids the churn that would happen with grant mappings
>where the same guest physical address region is used to map different
>pages (requiring a synchronous flush).
>

Yep, that's a neat trick to keep unflushed guest frames from being
reused until the batched flush is done later. :-)
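The windowed double-buffering scheme described above could be sketched roughly like this (again purely illustrative C; names like `window_get_slot` and the flush helpers are assumptions for the example, not Xen's real interfaces):

```c
#include <assert.h>

/* Model of a kmap-style window: slots are handed out in order; an
 * asynchronous flush is started at the halfway point and waited on at
 * the end of the window, so the flush overlaps with useful work and
 * first-half slots are safe to reuse by the time we wrap around. */

#define WINDOW_SLOTS 8

static int flushes_issued;   /* async flushes started */
static int flushes_synced;   /* value of flushes_issued at last sync */
static int next_slot;

static void iommu_flush_async(void) { flushes_issued++; }
static void iommu_flush_sync(void)  { flushes_synced = flushes_issued; }

/* Hand out the next free slot, flushing as the window fills. */
static int window_get_slot(void)
{
    int slot = next_slot++;

    if (next_slot == WINDOW_SLOTS / 2) {
        iommu_flush_async();     /* start the flush early ...          */
    } else if (next_slot == WINDOW_SLOTS) {
        iommu_flush_sync();      /* ... and only wait for it at the end */
        iommu_flush_async();     /* cover the second half of the window */
        next_slot = 0;           /* wrap: first half is now safe again  */
    }
    return slot;
}
```

The point of the half-window flush is that by the time the allocator wraps back to slot 0, the invalidation covering those early slots has long since been issued, so the synchronous wait is usually already complete.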

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
