
Re: [Xen-users] Xen and networking.



tmac wrote:
> NetApp GX with two heads and 10GigE's.
> Measured at over 2 Gigabytes/sec!
> Should easily handle 200 MBytes/sec
> 
> Network path:
> 
> VirtHostA -GigE-> 4948-10G (port 1 )-10gigE-> 6509 -> -10GigE-> NetApp
> VirtHostB -GigE-> 4948-10G (port 17)-10gigE-> 6509 -> -10GigE-> NetApp

Ok, I see you prefer *real* equipment ;)

I don't think the domUs' network stack is the problem, since a single
dd performs fine. As I'm pretty sure you have already checked that each
domU is connected to its own bridge, I would take a look at the domU configs.
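
Just to double-check the bridge assignment: brctl show in dom0 should
list each vif on its own bridge, roughly like this (the interface and
domain IDs below are only an example from a default RHEL Xen bridge
setup, yours may differ):

        # brctl show
        bridge name     bridge id               STP enabled     interfaces
        xenbr0          8000.xxxxxxxxxxxx       no              peth0
                                                                vif1.0
        xenbr1          8000.xxxxxxxxxxxx       no              peth1
                                                                vif2.0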

Have you tried pinning the VCPUs to dedicated cores? For a quick test,
I would reduce each domU to three VCPUs and keep two cores for dom0.
e.g.
domUa.cfg
        cpus = '1-3'
        vcpus = '3'
        vif = [ 'bridge=xenbr0,mac=...' ]

domUb.cfg
        cpus = '5-7'
        vcpus = '3'
        vif = [ 'bridge=xenbr1,mac=...' ]

xend-config.sxp
        (dom0-cpus 2)


Or, for a temporary change, use xm vcpu-pin / xm vcpu-set.
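
For example (the domain names are just placeholders here, use whatever
xm list shows for your guests):

        xm vcpu-set domUa 3
        xm vcpu-pin domUa 0 1
        xm vcpu-pin domUa 1 2
        xm vcpu-pin domUa 2 3

        xm vcpu-set domUb 3
        xm vcpu-pin domUb 0 5
        xm vcpu-pin domUb 1 6
        xm vcpu-pin domUb 2 7

        xm vcpu-list

The last command shows the CPU affinity of each VCPU, so you can check
the pinning actually took effect. Note that vcpu-set can only go up to
the number of VCPUs the domU was booted with.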

I found similar systems (MP dual-core and MP quad-core Xeons) performing
much better if each domU only uses cores located on the same physical CPU.
Without deeper knowledge of the details, I assume this has to do with
better use of the caches.
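
You can check the physical layout from dom0 with xm info (the exact
field names differ a bit between Xen versions):

        xm info | egrep 'nr_cpus|nr_nodes|sockets|cores_per_socket|threads_per_core'

That at least tells you whether the box is 2 sockets x 4 cores or
something else before you decide which ranges to pin.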

Regards

Stephan






> On Dec 28, 2007 7:30 PM, Stephan Seitz <s.seitz@xxxxxxxxxxxx> wrote:
>> Don't get me wrong,
>>
>> but my first thought was: What is the maximum expected throughput of the
>> NFS server? It should be connected to the switch with at least 2 Gbit/s
>> to serve two dd's at ~100MB/s each.
>>
>> Well, I assume both domU's are using the same nfs server.
>>
>>
>> Regards,
>>
>> Stephan
>>
>>
>> tmac wrote:
>>
>>> I have a beefy machine
>>> (Intel dual-quad core, 16GB memory 2 x GigE)
>>>
>>> I have loaded RHEL5.1-xen on the hardware and have created two logical 
>>> systems:
>>> 4 CPUs, 7.5 GB memory, 1 x GigE
>>>
>>> Following RHEL guidelines, I have it set up so that eth0->xenbr0 and
>>> eth1->xenbr1
>>> Each of the two RHEL5.1 guests uses one of the interfaces and this is
>>> verified at the
>>> switch by seeing the unique MAC addresses.
>>>
>>> If I do a crude test from one guest over nfs,
>>> dd if=/dev/zero of=/nfs/test bs=32768 count=32768
>>>
>>> This yields almost always 95-100MB/sec
>>>
>>> When I run two simultaneously, I cannot seem to get above 25MB/sec from 
>>> each.
>>> It starts off with a large burst like each can do 100MB/sec, but then
>>> in a couple
>>> of seconds, tapers off to 15-40MB/sec until the dd finishes.
>>>
>>> Things I have tried (set on both the host and the guests):
>>>
>>>  net.core.rmem_max = 16777216
>>>  net.core.wmem_max = 16777216
>>>  net.ipv4.tcp_rmem = 4096 87380 16777216
>>>  net.ipv4.tcp_wmem = 4096 65536 16777216
>>>
>>>  net.ipv4.tcp_no_metrics_save = 1
>>>  net.ipv4.tcp_moderate_rcvbuf = 1
>>>  # recommended to increase this for 1000 BT or higher
>>>  net.core.netdev_max_backlog = 2500
>>>  sysctl -w net.ipv4.tcp_congestion_control=cubic
>>>
>>> Any ideas?
>>>
>>>
>>


-- 
Stephan Seitz
Senior System Administrator

*netz-haut* e.K.
multimedia communication

zweierweg 22
97074 würzburg

phone: +49 931 2876247
fax: +49 931 2876248

web: http://www.netz-haut.de/

commercial register: amtsgericht würzburg, hra 5054


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

