
Re: [Xen-devel] [PATCH 7/7] x86: add iommu_ops to map and unmap pages, and also to flush the IOTLB



> From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> Sent: Tuesday, February 13, 2018 5:56 PM
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > Sent: 13 February 2018 06:56
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> > <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jan Beulich
> > <jbeulich@xxxxxxxx>
> > Subject: RE: [Xen-devel] [PATCH 7/7] x86: add iommu_ops to map and
> > unmap pages, and also to flush the IOTLB
> >
> > > From: Paul Durrant
> > > Sent: Monday, February 12, 2018 6:47 PM
> > >
> > > This patch adds iommu_ops to allow a domain with control_iommu
> > > privilege to map and unmap pages from any guest over which it has
> > > mapping privilege in the IOMMU.
> > > These operations implicitly disable IOTLB flushing so that the caller
> > > can batch operations and then explicitly flush the IOTLB using the
> > > iommu_op also added by this patch.
> >
> > Given that the last discussion was two years ago, and you said the actual
> > implementation has already diverged from the original spec, it is
> > difficult to judge whether the current change is sufficient or just a
> > first step. Could you summarize what has changed since the last spec, and
> > any further tasks on your TODO list?
> 
> Kevin,
> 
> The main changes are:
> 
> - there is no op to query mapping capability... instead the hypercall will
>   fail with -EACCES
> - there is no longer an option to avoid reference counting map and unmap
>   operations
> - there are no longer separate ops for mapping local and remote pages
>   (DOMID_SELF should be passed to the map op for local pages), and ops
>   always deal with GFNs, not MFNs
>   - also I have dropped the idea of a global m2b map, so...
>   - it is now going to be the responsibility of the code running in the
>     mapping domain to track what it has mapped [1]
> - there is no illusion that pages other than 4k are supported at the moment
> - the flush operation is now explicit
> 
> [1] this would be an issue if the interface becomes usable for anything
> other than dom0 as we'd also need something in Xen to release the page
> refs if the domain was forcibly destroyed, but I think the m2b was the
> wrong solution since it necessitates a full scan of *host* RAM on any
> domain destruction
> 
> The main item on my TODO list is to implement a new IOREQ to allow
> invalidation of specific guest pages. Think of the current 'invalidate map
> cache' as a global flush... I need a specific flush so that a
> decrease_reservation hypercall issued by a guest can instead tell emulators
> exactly which pages are being removed from the guest. It is then the
> emulators' responsibility to unmap those pages, if they had them mapped
> (either through the MMU or the IOMMU), which then drops the page refs and
> actually allows the pages to be recycled.
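
And on the emulator side, the handling of such a per-page invalidation would
be something like the sketch below? (The helper names are again just my own
placeholders to illustrate the flow, not anything from your series.)

    /* Illustrative emulator-side handling of a hypothetical per-page
     * invalidation request; all names below are placeholders. */
    #include <stdbool.h>
    #include <stdint.h>

    bool emu_has_mmu_mapping(uint64_t gfn);     /* placeholder mapping trackers */
    bool emu_has_iommu_mapping(uint64_t gfn);
    void emu_unmap_mmu(uint64_t gfn);           /* drops a page ref */
    void emu_unmap_iommu(uint64_t gfn);         /* drops a page ref */
    void emu_iommu_flush(void);                 /* the explicit flush op */

    void handle_invalidate_gfn(uint64_t gfn)
    {
        /* Tear down any foreign MMU mapping of the page... */
        if (emu_has_mmu_mapping(gfn))
            emu_unmap_mmu(gfn);

        /* ...and any IOMMU mapping established via the new iommu_ops. */
        if (emu_has_iommu_mapping(gfn)) {
            emu_unmap_iommu(gfn);
            emu_iommu_flush();
        }

        /* With all refs dropped, the page freed by the guest's
         * decrease_reservation can actually be recycled. */
    }
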
> 
> I will, of course, need to come up with more Linux code to test all this,
> which will eventually lead to kernel and user APIs to allow emulators
> running in dom0 to IOMMU-map guest pages.

Thanks for the elaboration. I couldn't find the original proposal. Can you
attach it or point me to a link?

> 
> >
> > at least, the map/unmap operations alone definitely do not meet XenGT's
> > requirements...
> >
> 
> What aspect of the hypercall interface does not meet XenGT's
> requirements? It would be good to know now so that I can make any
> necessary adjustments in v2.
> 

XenGT needs to replace the GFN with the BFN in the shadow GPU page table
for a given domain. Previously, IIRC, there was a query interface for that
purpose, since the mapping was managed by the hypervisor. Based on the above
description (e.g. dropping the m2b map), do you intend to let the Dom0
pvIOMMU driver manage all the related mapping information, so that GVT-g
just consults the pvIOMMU driver for that purpose?
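
In other words, something along the lines of the sketch below is what GVT-g
needs when writing a shadow GPU page table entry; pviommu_gfn_to_bfn() and
the PTE layout are purely hypothetical, just to show where the GFN-to-BFN
lookup would sit:

    /* Hypothetical: GVT-g consulting the dom0 pvIOMMU driver for the BFN
     * backing a guest GFN while shadowing a GPU page table entry. */
    #include <stdint.h>

    #define PAGE_SHIFT    12
    #define GTT_PTE_VALID 0x1ULL                /* illustrative PTE flag */

    /* Placeholder lookup into whatever mapping state the pvIOMMU driver
     * keeps (or a map-on-demand path); returns 0 and the BFN on success. */
    int pviommu_gfn_to_bfn(uint16_t domid, uint64_t gfn, uint64_t *bfn);

    int shadow_gtt_entry(uint16_t domid, uint64_t gfn, uint64_t *spte)
    {
        uint64_t bfn;
        int rc = pviommu_gfn_to_bfn(domid, gfn, &bfn);

        if (rc)
            return rc;

        /* The shadow entry must contain the BFN, not the guest's GFN. */
        *spte = (bfn << PAGE_SHIFT) | GTT_PTE_VALID;
        return 0;
    }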

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

