[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: Xen network domain performance for 10Gb NIC


  • To: tosher 1 <akm2tosher@xxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 27 Apr 2020 09:03:17 +0200
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 27 Apr 2020 07:03:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Sun, Apr 26, 2020 at 07:18:33PM +0000, tosher 1 wrote:
>  Hi everyone,
> 
> Lately, I have been experimenting with 10Gb NIC performance on Xen domains. I 
> have found that network performance is very poor for PV networking when a 
> driver domain is used as a network backend.
> 
> My experimental setup is I have two machines connected by the 10Gb network: a 
> server running the Xen hypervisor and a desktop machine working as a client. 
> I have Ubuntu 18.04.3 LTS running on the Dom0, DomUs, driver domain, and 
> client desktop, where the Xen version is 4.9. I measured the network 
> bandwidth using iPerf3.
> 
> The network bandwidth between a DomU using Dom0 as backend and the client 
> desktop is about 9.39 Gbits/sec. However, when I use a network driver 
> domain, which has the 10Gb NIC by PCI passthrough, the bandwidth between 
> the DomU and the client desktop is about 2.41 Gbits/sec in one direction 
> and 4.48 Gbits/sec in the other direction. Here, by direction, I mean the 
> client-server direction for iPerf3.
> 
> These results indicate a huge performance degradation, which is unexpected. I 
> am wondering if I am missing any key points here which I should have taken 
> care of or if there is any tweak that I can apply.

Driver domains with passthrough devices need to perform IOMMU
operations in order to add/remove page table entries when doing grant
maps (i.e., IOMMU TLB flushes), while dom0 doesn't need to because it
has the whole RAM identity mapped in the IOMMU tables. Depending on
how fast your IOMMU is and what capabilities it has, such operations
can introduce a significant amount of overhead.
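One quick way to see which IOMMU Xen found and which features it
enabled is to look at the hypervisor boot log (the exact wording of
the output depends on the Xen version and on whether the platform uses
Intel VT-d or AMD-Vi):

```shell
# Dump the Xen hypervisor boot log and pick out IOMMU-related lines
xl dmesg | grep -i -e iommu -e vt-d -e amd-vi
```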

I would also give a newer version of Xen a try, since there have been
some changes in IOMMU management, but I would guess your bottleneck
doesn't come from the code itself, but rather from the IOMMU.
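To confirm which hypervisor version is actually running (the toolstack
and the hypervisor can be upgraded independently), something like the
following works:

```shell
# Print the running hypervisor version as reported by the toolstack
xl info | grep -e xen_version -e xen_extra
```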

Roger.
