
Re: [Xen-devel] Discussion about virtual iommu support for Xen guest



On 26/05/16 09:29, Lan Tianyu wrote:
> Hi All:
> We are trying to add virtual IOMMU support for Xen guests; several
> features are currently blocked on it.
>
> Motivation:
> -----------------------
> 1) Add SVM (Shared Virtual Memory) support for Xen guests
> To support iGFX pass-through with SVM-enabled devices, we need
> virtual IOMMU support to emulate the related registers and to
> intercept/handle the guest's SVM configuration in the VMM.
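>
> As a rough sketch of the register emulation (the struct and handler
> names below are illustrative, not an existing Xen or QEMU API; only
> the register offsets come from the VT-d spec):
>
>   #include <stdint.h>
>
>   /* Architectural VT-d register offsets (per the VT-d spec). */
>   #define DMAR_ECAP_REG   0x10  /* extended caps: advertise PASID/SVM */
>   #define DMAR_GCMD_REG   0x18  /* global command */
>   #define DMAR_RTADDR_REG 0x20  /* root table address */
>
>   struct viommu {               /* hypothetical vIOMMU model state */
>       uint64_t root_table_gpa;
>       uint32_t gsts;            /* shadow of global status */
>   };
>
>   /* The VMM traps guest writes to the emulated register page and
>    * updates its model, e.g. latching the root table pointer that
>    * the guest's SVM (PASID table) configuration hangs off. */
>   static void viommu_mmio_write(struct viommu *v, uint64_t off,
>                                 uint64_t val)
>   {
>       switch (off) {
>       case DMAR_RTADDR_REG:
>           v->root_table_gpa = val;    /* guest physical address */
>           break;
>       case DMAR_GCMD_REG:
>           /* A real model would act on each command bit (enable
>            * translation, set root table pointer, ...) and then
>            * reflect completion into the status register. */
>           v->gsts = (uint32_t)val;
>           break;
>       default:
>           break;                      /* unimplemented: ignore */
>       }
>   }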
>
> 2) Increase the maximum vcpu count for one VM.
>
> So far, the maximum number of vcpus for a Xen HVM guest is 128. HPC
> (High Performance Computing) cloud deployments need more vcpus in a
> single VM. The usage model is to create just one VM on a machine,
> with as many vcpus as the host has logical cpus, and pin each vcpu
> to a logical cpu to get good compute performance.
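>
> As a rough illustration, the xl guest config for this model would
> look something like the fragment below (assuming a host with 288
> logical cpus, e.g. the KNL case mentioned next; strict 1:1 pinning
> can also be done per-vcpu with `xl vcpu-pin' after boot):
>
>   # one vcpu per host logical cpu, hard-pinned to the host cpus
>   vcpus = 288
>   cpus = "0-287"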
>
> Intel Xeon Phi KNL (Knights Landing) is dedicated to the HPC market
> and supports 288 logical cpus, so we would like VMs to support 288
> vcpus to meet that requirement.
>
> The current Linux kernel requires IR (interrupt remapping) when the
> maximum APIC ID is > 255, because without IR interrupts can only be
> delivered to cpus 0~255. IR in a VM relies on virtual IOMMU support.
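>
> To make the limit concrete: in the compatibility MSI format the
> destination APIC ID is an 8-bit field of the message address, while
> the VT-d remappable format replaces it with a handle into the
> interrupt remapping table, whose entries can hold full 32-bit
> x2APIC IDs. A sketch of the two encodings (bit positions as we read
> the VT-d spec, so treat them as illustrative):
>
>   #include <stdint.h>
>
>   /* Compatibility format: destination APIC ID in bits 19:12 of the
>    * MSI address, i.e. only 8 bits -> at most 256 cpus. */
>   static inline uint32_t msi_compat_addr(uint8_t dest_apic_id)
>   {
>       return 0xfee00000u | ((uint32_t)dest_apic_id << 12);
>   }
>
>   /* Remappable format: bit 4 marks the message as remappable;
>    * bits 19:5 and bit 2 carry a 16-bit IRTE handle, and the IRTE
>    * itself holds the (32-bit) destination ID. */
>   static inline uint32_t msi_remap_addr(uint16_t handle)
>   {
>       return 0xfee00000u |
>              ((uint32_t)(handle & 0x7fffu) << 5) |
>              ((uint32_t)(handle >> 15) << 2) |
>              (1u << 4);               /* interrupt format = remappable */
>   }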
>
> KVM virtual IOMMU support status
> ------------------------
> Currently, QEMU has a basic virtual IOMMU that does address
> translation for virtual devices, and it only works with the Q35
> machine type. KVM reuses it, and Red Hat is adding IR support to go
> beyond 255 vcpus.
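>
> For reference, the QEMU side is enabled roughly like this (the
> intremap property is the in-progress IR work mentioned above, and
> with KVM it needs the split irqchip; exact option spelling may
> differ by QEMU version):
>
>   qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
>       -device intel-iommu,intremap=on ...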
>
> How to add a virtual IOMMU for Xen?
> -------------------------
> The first idea that came to mind is to reuse the QEMU virtual IOMMU,
> but Xen doesn't support Q35 so far, and enabling Q35 for Xen does
> not look like a short-term task. Anthony did some related work on
> this before.
>
> I'd like to hear your comments on how to implement a virtual IOMMU for Xen.
>
> 1) Reuse the QEMU virtual IOMMU, or write a separate one for Xen?
> 2) Enable Q35 for Xen so we can reuse the QEMU virtual IOMMU?
>
> Your comments would be much appreciated. Thanks a lot.

To be viable going forwards, any solution must work with PVH/HVMLite as
much as with HVM.  This alone rules out qemu as a viable option.

From a design point of view, having Xen delegate to qemu to inject an
interrupt into a guest seems backwards.


A whole lot of this would be easier to reason about if/when we get a
basic root port implementation in Xen, which is necessary for HVMLite,
and which will make the interaction with qemu rather cleaner.  It is
probably worth coordinating work in this area.


As for the individual issue of 288-vcpu support, there are already
issues with 64-vcpu guests at the moment.  While it is certainly fine to
remove the hard limit at 255 vcpus, there is a lot of other work
required to even get 128-vcpu guests stable.

~Andrew
