
Re: [Xen-devel] Discussion about virtual iommu support for Xen guest



On 5/27/2016 4:19 PM, Lan Tianyu wrote:
On 2016-05-26 19:35, Andrew Cooper wrote:
On 26/05/16 09:29, Lan Tianyu wrote:

To be viable going forwards, any solution must work with PVH/HVMLite as
much as HVM.  This alone negates qemu as a viable option.

From a design point of view, having Xen need to delegate to qemu to
inject an interrupt into a guest seems backwards.


Sorry, I am not familiar with HVMLite. HVMLite doesn't use Qemu, so the
Qemu virtual iommu can't work for it. We would have to reimplement the
virtual iommu in Xen, right?
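If it does come to that, the core of a Xen-side virtual iommu would
essentially be MMIO emulation of the VT-d remapping-unit registers plus
walks of the guest-programmed tables, with no qemu on that path. A rough
sketch only (made-up names, not real Xen code; register offsets per the
VT-d spec):

    #include <stdint.h>

    #define VTD_VER_REG   0x00   /* Version */
    #define VTD_CAP_REG   0x08   /* Capability */
    #define VTD_ECAP_REG  0x10   /* Extended capability (IR bit lives here) */
    #define VTD_GCMD_REG  0x18   /* Global command */
    #define VTD_GSTS_REG  0x1C   /* Global status */

    struct viommu {
        uint64_t cap;        /* capabilities advertised to the guest */
        uint64_t ecap;       /* extended caps, e.g. interrupt remapping */
        uint32_t gsts;       /* status latched from guest GCMD writes */
        uint64_t irt_base;   /* guest PA of the interrupt remapping table */
    };

    /* MMIO read handler: register reads are served entirely inside the
     * hypervisor, so the same code works for HVM and PVH/HVMLite. */
    static uint64_t viommu_mmio_read(const struct viommu *v, uint64_t off)
    {
        switch (off) {
        case VTD_VER_REG:  return 0x10;      /* report VT-d version 1.0 */
        case VTD_CAP_REG:  return v->cap;
        case VTD_ECAP_REG: return v->ecap;
        case VTD_GSTS_REG: return v->gsts;
        default:           return 0;
        }
    }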


A whole lot of this would be easier to reason about if/when we get a
basic root port implementation in Xen, which is necessary for HVMLite,
and which will make the interaction with qemu rather more clean.  It is
probably worth coordinating work in this area.

The virtual iommu should also live under the basic root port in Xen, right?
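If it helps the discussion: a basic root port is essentially a type 1
(PCI-to-PCI bridge) config-space device plus bus-number routing, so the
emulation surface in Xen is small. Purely illustrative (offsets per the
PCI spec, IDs are placeholders, not actual Xen code):

    #include <stdint.h>
    #include <string.h>

    struct vroot_port {
        uint8_t cfg[256];                 /* type 1 configuration header */
    };

    static void vroot_port_init(struct vroot_port *rp)
    {
        memset(rp->cfg, 0, sizeof(rp->cfg));
        rp->cfg[0x00] = 0x86; rp->cfg[0x01] = 0x80;  /* vendor ID (placeholder) */
        rp->cfg[0x0A] = 0x04; rp->cfg[0x0B] = 0x06;  /* class: PCI-to-PCI bridge */
        rp->cfg[0x0E] = 0x01;                        /* header type 1 */
        rp->cfg[0x18] = 0x00;                        /* primary bus */
        rp->cfg[0x19] = 0x01;                        /* secondary bus */
        rp->cfg[0x1A] = 0x01;                        /* subordinate bus */
    }

    /* Config-space read trapped by the hypervisor (CF8/CFC or MMCFG). */
    static uint32_t vroot_port_cfg_read(const struct vroot_port *rp,
                                        unsigned int off, unsigned int len)
    {
        uint32_t val = 0;

        if (len <= 4 && off + len <= sizeof(rp->cfg))
            memcpy(&val, &rp->cfg[off], len);
        return val;
    }

The guest's DMAR table would then describe the virtual iommu as the
remapping unit covering the devices behind that root complex.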


As for the individual issue of 288vcpu support, there are already issues
with 64vcpu guests at the moment. While it is certainly fine to remove
the hard limit at 255 vcpus, there is a lot of other work required to
even get 128vcpu guests stable.


Could you give some pointers to these issues? We are enabling support for
more vcpus, and a guest can basically boot with 255 vcpus even without IR
support. It would be very helpful to learn about the known issues.

We will also add 128-vcpu tests to our regular testing to find related
bugs. Increasing the maximum vcpu count to 255 should be a good start.
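For completeness, the reason 255 is a hard ceiling without IR: a
compatibility-format MSI (and an IOAPIC RTE) only carries an 8-bit APIC
destination ID, while a remappable-format MSI carries a 16-bit handle
into the interrupt remapping table, and the IRTE holds a full 32-bit
x2APIC destination. A small illustration of the two address layouts
(per the VT-d spec; not Xen code):

    #include <stdint.h>

    /* Compatibility format: 0xFEExxxxx with the destination APIC ID in
     * address bits 19:12 -- only 8 bits, and 0xFF means broadcast, so
     * APIC IDs above 254 cannot be targeted this way. */
    static uint32_t msi_addr_compat(uint8_t dest_apic_id)
    {
        return 0xFEE00000u | ((uint32_t)dest_apic_id << 12);
    }

    /* Remappable format: bit 4 = remappable, bits 19:5 = handle[14:0],
     * bit 2 = handle[15], bit 3 = subhandle valid.  The wide destination
     * lives in the IRTE that the handle points at. */
    static uint32_t msi_addr_remappable(uint16_t handle)
    {
        return 0xFEE00000u
             | (((uint32_t)handle & 0x7FFFu) << 5)
             | (1u << 4)
             | (1u << 3)
             | (((uint32_t)handle >> 15) << 2);
    }

    /* The relevant part of an IRTE: in x2APIC mode the destination is a
     * full 32-bit APIC ID, which is what makes vcpus >= 255 reachable. */
    struct irte_dest {
        uint32_t dest_id;    /* x2APIC destination ID */
        uint8_t  vector;
    };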

Hi Andrew:
Could you give more input about the issues with 64 vcpus and what needs
to be done to make 128-vcpu guests stable? We hope to do something to
improve them.

What's the progress of the PCI host bridge in Xen? In your opinion, we
should do that first, right? Thanks.

~Andrew




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

