
[Xen-devel] Discussion about virtual iommu support for Xen guest

Hi All:
We are trying to push virtual IOMMU support for Xen guests, as several
features are blocked without it.

1) Add SVM (Shared Virtual Memory) support for Xen guests
To support iGFX pass-through for SVM-enabled devices, virtual IOMMU
support is required to emulate the related registers and to
intercept/handle the guest's SVM configuration in the VMM.

2) Increase max vcpu support for one VM.

So far, the maximum vcpu count for a Xen HVM guest is 128. For HPC
(High Performance Computing) cloud computing, more vcpus are required
in a single VM. The usage model is to create just one VM on a machine,
with the same number of vcpus as logical cpus on the host, and to pin
each vcpu to a logical cpu in order to get good compute performance.

Intel Xeon Phi KNL (Knights Landing) targets the HPC market and
supports 288 logical cpus, so we hope a VM can support 288 vcpus to
meet the HPC requirement.

The current Linux kernel requires IR (interrupt remapping) when the
maximum APIC ID is > 255, because without IR interrupts can only be
delivered to APIC IDs 0~255. IR in a VM relies on virtual IOMMU support.

KVM virtual IOMMU support status
Currently, Qemu has a basic virtual IOMMU that does address translation
for virtual devices, and it only works with the Q35 machine type. KVM
reuses it, and Red Hat is adding IR support to allow more than 255 vcpus.

How to add virtual iommu for Xen?
The first idea that came to my mind was to reuse Qemu's virtual IOMMU,
but Xen does not support Q35 so far, and enabling Q35 for Xen does not
seem to be a short-term task. Anthony did some related work before.

I'd like to see your comments about how to implement virtual iommu for Xen.

1) Reuse Qemu's virtual IOMMU, or write a separate one for Xen?
2) Enable Q35 for Xen so that Qemu's virtual IOMMU can be reused?

Your comments would be much appreciated. Thanks a lot.
Best regards
Tianyu Lan

Xen-devel mailing list