
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



>>> On 10.12.14 at 10:51, <kevin.tian@xxxxxxxxx> wrote:
>>  From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Wednesday, December 10, 2014 5:17 PM
>> 
>> >>> On 10.12.14 at 09:47, <kevin.tian@xxxxxxxxx> wrote:
>> > Two translation paths exist in the assigned case:
>> >
>> > 1. [direct CPU access from VM]: with the PCI aperture resource
>> > partitioned, every VM can access a portion of the PCI aperture
>> > directly.
>> >
>> > - CPU page table/EPT: CPU virtual address -> PCI aperture
>> > - PCI aperture - BAR base = Graphics Memory Address (GMA)
>> > - GPU page table: GMA -> GPA (as programmed by the guest)
>> > - IOMMU: GPA -> MPA
>> >
>> > 2. [GPU access through GPU command operands]: with GPU scheduling,
>> > every VM's command buffer is fetched by the GPU in a time-shared
>> > manner.
>> >
>> > - GPU page table: GMA -> GPA
>> > - IOMMU: GPA -> MPA
>> >
>> > In our case, the IOMMU is set up with a 1:1 identity table for
>> > dom0. Since the GPU may access GPAs from different VMs, we can't
>> > count on the IOMMU, which can only provide one mapping per device
>> > (unless we have SR-IOV).
>> >
>> > That's why we need a shadow GPU page table in dom0, and a p2m
>> > query call to translate GPA -> MPA:
>> >
>> > - shadow GPU page table: GMA -> MPA
>> > - IOMMU: MPA -> MPA (identity, for dom0)
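[To make the shadow-table step concrete, a minimal C sketch follows. All
names here are invented for illustration -- xengt_gfn_to_mfn(),
shadow_gpu_pte(), and the PTE layout are not real Xen or XenGT
interfaces. It folds the guest's GMA -> GPA entry and the p2m's
GPA -> MPA answer into a single GMA -> MPA shadow entry:

/*
 * Hedged sketch only: shows how dom0 might shadow one guest GPU page
 * table entry, collapsing GMA -> GPA -> MPA into GMA -> MPA.
 */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT   12
#define PTE_PRESENT  0x1ULL
#define PTE_FLAGS    ((1ULL << PAGE_SHIFT) - 1)

typedef uint64_t gpu_pte_t;          /* one GPU page table entry */

/*
 * Stand-in for the gfn -> mfn query under discussion.  A real
 * implementation would be a hypercall into the p2m; the identity
 * return here exists only to keep the sketch self-contained.
 */
static uint64_t xengt_gfn_to_mfn(int domid, uint64_t gfn)
{
    (void)domid;
    return gfn;                      /* placeholder, not a real lookup */
}

/*
 * Build one shadow PTE (GMA -> MPA) from a guest PTE (GMA -> GPA).
 * The guest wrote a GPA into its GPU page table; dom0 swaps in the
 * machine frame so the GPU, running behind dom0's 1:1 IOMMU mapping,
 * reaches the correct machine page.
 */
static bool shadow_gpu_pte(int domid, gpu_pte_t guest_pte,
                           gpu_pte_t *shadow_pte)
{
    uint64_t gfn, mfn;

    if (!(guest_pte & PTE_PRESENT))
        return false;

    gfn = guest_pte >> PAGE_SHIFT;           /* GPA -> frame number */
    mfn = xengt_gfn_to_mfn(domid, gfn);      /* p2m query: GPA -> MPA */

    *shadow_pte = (mfn << PAGE_SHIFT) | (guest_pte & PTE_FLAGS);
    return true;
}

The open question in this thread is what that gfn -> mfn query should
return; the stub above only highlights where its answer gets consumed.]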
>> 
>> I still can't see why the Dom0 translation has to remain 1:1, i.e.
>> why Xen couldn't return some "arbitrary" GPA for the query in
>> question here, setting up a suitable GPA->MPA translation. (I put
>> arbitrary in quotes because this of course must not conflict with
>> GPAs already or possibly in use by Dom0.) And I can only stress
>> again that you shouldn't leave out PVH (where the IOMMU already
>> isn't set up with all 1:1 mappings) from these considerations.
>> 
> 
> It's interesting that you think the IOMMU can be used in such a
> situation.
> 
> What do you mean by an "arbitrary" GPA here? It's not just about
> conflicting with Dom0's GPAs; it's about conflicts among all the
> VMs' GPAs when you host them through one IOMMU page table, and
> there's no way to prevent that, since the GPAs are picked by the
> VMs themselves.

As long as, for the involved DomU-s, the physical address comes in
ways similar to PCI device BARs (which they're capable of dealing
with), that's not a problem imo. For Dom0, just as BARs get assigned
while bringing up PCI devices, a "virtual" BAR could be invented here.
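
[A hedged sketch of that "virtual" BAR idea, with every name and
constant invented for illustration: Xen would carve an otherwise-unused
window out of dom0's guest physical space, back each query result with
a fresh GPA -> MPA mapping, and hand dom0 the GPA instead of the MFN:

/*
 * Sketch of the "virtual BAR" alternative: instead of exposing MFNs
 * to dom0, the hypervisor returns addresses from a reserved dom0 GPA
 * window and installs matching GPA -> MPA p2m/IOMMU mappings.
 */
#include <stdint.h>

#define PAGE_SIZE  0x1000ULL
#define VBAR_BASE  0xE0000000ULL     /* hypothetical free dom0 window */
#define VBAR_END   0xF0000000ULL

static uint64_t vbar_next = VBAR_BASE;

/* Stand-in for installing a GPA -> MPA mapping in dom0's p2m (and
 * hence its IOMMU context); always succeeds in this sketch. */
static int dom0_p2m_map(uint64_t gpa, uint64_t mpa)
{
    (void)gpa;
    (void)mpa;
    return 0;
}

/*
 * Answer a dom0 translation query: pick the next free page in the
 * virtual BAR window, map it to the target machine page, and return
 * the GPA.  Dom0 then writes this GPA (not an MFN) into the shadow
 * GPU page table, leaving the final GPA -> MPA step to the IOMMU.
 */
static uint64_t assign_vbar_gpa(uint64_t mpa)
{
    uint64_t gpa;

    if (vbar_next >= VBAR_END)
        return (uint64_t)-1;         /* window exhausted */

    gpa = vbar_next;
    vbar_next += PAGE_SIZE;

    if (dom0_p2m_map(gpa, mpa))
        return (uint64_t)-1;

    return gpa;
}

Notably, the resulting Dom0 IOMMU entries are no longer identity
mappings, which is what would let the PVH case (where the IOMMU
already isn't 1:1) be handled the same way.]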

> I don't think we can support PVH here if the IOMMU is not a 1:1
> mapping.

That would make XenGT quite a bit less useful going forward. But
otoh, don't you only care about certain MMIO regions being 1:1
mapped? That's the case for PVH Dom0 too, iirc.

Jan

