
Re: [Xen-devel] VT-d scalability issue



On Tue, Sep 09, 2008 at 10:28:59AM +0100, Ian Pratt wrote:
> > When I assign a pass-through NIC to a Linux VM and increase the number
> > of VMs, the iperf throughput for each VM drops greatly. Say, start 8
> > VMs on a machine with 8 physical cpus and start 8 iperf clients, one
> > connecting to each VM; the final result is only 60% of the single-VM
> > case.
> > 
> > Further investigation shows that vcpu migration causes a "cold" cache
> > for the pass-through domain.
> 
> Just so I understand the experiment, does each VM have a pass-through
> NIC, or just one?


Each VM has a pass-through device.

> 
> > The following code in vmx_do_resume tries to invalidate the original
> > processor's cache on migration if the domain has a pass-through device
> > and the CPU has no support for wbinvd vmexit:
> > 
> > if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
> > {
> >     int cpu = v->arch.hvm_vmx.active_cpu;
> >     if ( cpu != -1 )
> >         on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
> > }
> > 
> > So we want to pin vcpus to free processors for domains with
> > pass-through devices during the creation process, just like what we do
> > for NUMA systems.
> 
> What pinning functionality would we need beyond what's already there?

I think you mean the "cpus" option in the config file for vcpu affinity.
That requires extra effort from the end user. We just want to pin vcpus for
VT-d domains automatically in xend, like we pin vcpus to a free node on
NUMA systems; a rough sketch follows below.
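For comparison, the manual workaround today is the config option, e.g.
cpus = "2" to pin a domain's vcpus to physical cpu 2. What we have in mind
is roughly the following in xend's domain-creation path. This is only a
sketch, not a patch: pick_free_cpu(), used_cpus and nr_phys_cpus are
made-up names for the bookkeeping xend would need, and the one real
interface used is the xc.vcpu_setaffinity binding that xend already calls
when the user sets "cpus" by hand (if I have the binding right).

    # Sketch: automatic vcpu pinning for VT-d domains at creation time.
    # "xc" is the xen.lowlevel.xc handle xend already holds; nr_phys_cpus
    # would come from xc.physinfo().

    def pick_free_cpu(used_cpus, nr_phys_cpus):
        # Return the first physical cpu not yet taken by another
        # pass-through domain, or None if all cpus are committed.
        for c in range(nr_phys_cpus):
            if c not in used_cpus:
                return c
        return None

    def pin_passthrough_domain(xc, domid, nr_vcpus, used_cpus, nr_phys_cpus):
        for v in range(nr_vcpus):
            cpu = pick_free_cpu(used_cpus, nr_phys_cpus)
            if cpu is None:
                break            # no free cpu left; leave vcpu unpinned
            used_cpus.add(cpu)
            # Same call xend makes for a manual "cpus" setting.
            xc.vcpu_setaffinity(domid, v, [cpu])

With each VT-d domain's vcpus nailed to their own processors, a vcpu never
resumes on a different cpu, so the wbinvd IPI path above is never taken and
the cache stays warm.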

> 
> Thanks,
> Ian
> 
>  
> > What do you think of it? Or do you have other ideas?
> > 
> > Thanks,
> > 
> > 
> > --
> > best rgds,
> > edwin
> > 

-- 
best rgds,
edwin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
