
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



On 2/4/2016 1:50 AM, George Dunlap wrote:
On Wed, Feb 3, 2016 at 3:10 PM, Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
  * Is it possible for libxl to somehow tell from the rest of the
    configuration that this larger limit should be applied?


If a XenGT-enabled VM is provisioned through libxl then some larger limit is 
likely to be required. One of the issues is that it is impossible (or at least 
difficult) to know how many GTTs are going to need to be shadowed.

By GTT, you mean the GPU pagetables I assume?  So you're talking about

Yes, GTT is "Graphics Translation Table" for short.

how large this value should be made, not whether the
heuristically-chosen larger value should be used.  libxl should be
able to tell that XenGT is enabled, I assume, so it should be able to
automatically bump this to 8k if necessary, right?


Yes.

But I think you'd still need a parameter that you could tweak if it
turned out that 8k wasn't enough for a particular workload, right?


Well, not exactly. For XenGT, the latest suggestion is that even if 8K
turns out not to be enough, we will not extend this limit any further. But
when introducing this parameter, I thought it might also be useful for
other device virtualization cases that want to use the mediated
pass-through idea.
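
To make the heuristic George describes a bit more concrete, here is a
minimal sketch of how a toolstack might pick the limit. Everything below is
hypothetical illustration rather than the actual libxl code: only the option
name max_wp_ram_ranges comes from this series' subject, and the default of
256 is an assumption about the ordinary ioreq-server range limit.

  /*
   * Hypothetical sketch -- not the real libxl implementation.
   * Only the name max_wp_ram_ranges comes from this series; the
   * default of 256 is an assumed ordinary ioreq-server range limit.
   */
  #include <stdbool.h>
  #include <stdint.h>

  #define WP_RAM_RANGES_DEFAULT  256U   /* assumed ordinary limit     */
  #define WP_RAM_RANGES_XENGT   8192U   /* the 8K cap discussed above */

  static uint32_t choose_max_wp_ram_ranges(bool xengt_vgpu_enabled,
                                           uint32_t cfg_max_wp_ram_ranges)
  {
      /* An explicit max_wp_ram_ranges= in the guest config always wins. */
      if (cfg_max_wp_ram_ranges)
          return cfg_max_wp_ram_ranges;

      /* Otherwise bump the default when a XenGT vGPU is configured. */
      return xengt_vgpu_enabled ? WP_RAM_RANGES_XENGT
                                : WP_RAM_RANGES_DEFAULT;
  }

That would give the behaviour discussed above: XenGT guests silently get the
larger limit, everyone else keeps the normal one, and an explicit setting
still acts as the tweakable knob.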

  * If we are talking about mmio ranges for ioreq servers, why do
    guests which do not use this feature have the ability to create
    them at all?

It's not the guest that directly creates the ranges, it's the emulator. 
Normally device emulation would require a relatively small number of MMIO 
ranges and a total number that cannot be influenced by the guest itself. In 
this case though, as I said above, the number *can* be influenced by the guest 
(although it is still the emulator which actually causes the ranges to be 
created).

Just to make this clear: The guest chooses how many gpfns are used in
the GPU pagetables; for each additional gpfn in the guest pagetable,
qemu / xen have the option of either marking it to be emulated (at the
moment, by marking it as a one-page "MMIO region") or crashing the
guest.


Well, kind of. The backend device model in dom0 (not qemu) decides whether
or not a given page is to be emulated.
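
Just to illustrate that division of labour, here is a rough sketch of the
choice George summarises, written with made-up names (register_wp_page(),
reject_guest_page() and on_new_gtt_page() are stand-ins, not the real XenGT
or libxc interfaces): for each new GPFN the guest installs in one of its
GTTs, the dom0 device model either asks Xen to write-protect that single
page so accesses are forwarded for emulation, or rejects it, in which case
the guest ends up being crashed.

  /* Hypothetical sketch only -- names are stand-ins, not real code. */
  #include <stdbool.h>
  #include <stdint.h>

  typedef uint64_t gpfn_t;

  static unsigned int nr_wp_pages;              /* pages already tracked  */
  static unsigned int max_wp_ram_ranges = 8192; /* the limit under debate */

  /* Stand-in: ask Xen to write-protect one guest page for emulation. */
  static bool register_wp_page(gpfn_t gpfn) { (void)gpfn; return true; }

  /* Stand-in: refuse the page; Xen/toolstack then crashes the guest. */
  static void reject_guest_page(gpfn_t gpfn) { (void)gpfn; }

  /* Called by the device model when the guest adds a page to a GTT. */
  static void on_new_gtt_page(gpfn_t gpfn)
  {
      if (nr_wp_pages >= max_wp_ram_ranges || !register_wp_page(gpfn))
          reject_guest_page(gpfn);    /* guest gets crashed */
      else
          nr_wp_pages++;
  }

This is also why the number of ranges is guest-influenced: every extra GPFN
the guest chooses to put into a GTT is another page the device model has to
track.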

(A background problem I have is that this thread is full of product
name jargon and assumes a lot of background knowledge of the
implementation of these features - background knowledge which I lack
and which isn't in these patches.  If someone could point me at a
quick summary of what `GVT-g' and `GVT-d' are that might help.)


GVT-d is a name applied to PCI passthrough of an Intel GPU. GVT-g is a name 
applied to Intel GPU virtualization, which makes use of an emulator to mediate 
guest access to the real GPU so that it is possible to share the resources 
securely.

And GTT are the GPU equivalent of page tables?

Yes.

Here let me try to give a brief introduction to the jargon:
* Intel GVT-d: an Intel graphics virtualization solution which dedicates
one physical GPU exclusively to a single guest.

* Intel GVT-g: an Intel graphics virtualization solution with mediated
pass-through support. One physical GPU can be shared by multiple guests:
performance-critical GPU resources are partitioned and passed through to
the different vGPUs, while the remaining GPU resources are trapped and
emulated by the device model.

* XenGT: the code name of Intel GVT-g on Xen. This patch series provides
features required by XenGT.

* vGPU: virtual GPU presented to guests.

* GTT: abbreviation for graphics translation table, a page-table
structure which translates graphics memory addresses to physical ones.
For a vGPU, the PTEs in its GTT hold GPFNs, which requires the device
model to construct a set of shadow GPU page tables for the physical GPU
to walk (sketched below).
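
For illustration only, this is roughly what that shadowing amounts to:
copy each guest GTT entry, but substitute the real machine frame for the
guest frame number. gpfn_to_mfn(), the table size and the entry layout
below are assumptions standing in for the real backend code, not the
actual GTT format.

  /* Hypothetical sketch of GTT shadowing -- not the real entry format. */
  #include <stddef.h>
  #include <stdint.h>

  #define GTT_ENTRIES 512            /* illustrative table size           */
  #define PAGE_SHIFT  12
  #define FLAGS_MASK  0xfffULL       /* assumed low-bit flags in an entry */

  typedef uint64_t gtt_entry_t;

  /* Stand-in for the device model's GPFN -> MFN translation. */
  static uint64_t gpfn_to_mfn(uint64_t gpfn)
  {
      return gpfn;                   /* identity map, for the sketch only */
  }

  /* Build a shadow GTT the physical GPU can walk: same flags, real frames. */
  static void shadow_gtt(const gtt_entry_t *guest, gtt_entry_t *shadow)
  {
      for (size_t i = 0; i < GTT_ENTRIES; i++) {
          uint64_t flags = guest[i] & FLAGS_MASK;
          uint64_t gpfn  = guest[i] >> PAGE_SHIFT;

          shadow[i] = (gpfn_to_mfn(gpfn) << PAGE_SHIFT) | flags;
      }
  }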

Thanks
Yu



 

