RE: [Xen-users] Slow network performance between HVM guests
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> Martin Goldstone
> Sent: 23 May 2007 16:05
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Slow network performance between HVM guests
>
> Hi all,
>
> We're running Xen 3.1 on CentOS 5 here (x86_64, kernel 2.6.18), and
> we're seeing some odd networking issues with our Windows HVM guests.
> Perhaps someone here has some ideas they can offer.
>
> Briefly, the specs of the host are: dual Xeon 3 GHz (dual core + HT),
> 4GB RAM, 73GB HDD. The network card is an e1000.
>
> The Windows HVM guests are Windows Server 2003 R2 Standard x86-64,
> with 2 VCPUs, 1GB RAM, a 16GB HDD (in a file), and an rtl8139 network
> device.
>
> Basically, file transfer speeds between guests (both on the same
> bridge) on the same host are approximately 10% (if not less) of the
> speed of file transfers from an HVM guest to another system
> (virtualised or not) away from that host. Any ideas, or is this
> normal behaviour?

Most likely it's because the network access is going through qemu-dm,
which means that Dom0 has to "emulate" the network device. With BOTH
devices being emulated on the same Dom0, latency is added at both
ends, and there's less likely to be any "overlap" between them.

Do you by any chance also restrict Dom0 to a single core? Also, are
the two guest domains running on the same or different cores? If both
domains use the same core, they would obviously "stop each other" from
running.

> This is affecting us with Xen 3.0.3 and 3.0.4.1 as well, so it's not
> just a 3.1 thing. I've disabled iptables on dom0 to see if that makes
> a difference (it doesn't). We've tried the 32-bit version of Windows,
> and we've reduced the number of VCPUs to 1 and increased it to 4, all
> without success.
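For what it's worth, vCPU placement can be inspected and adjusted from Dom0 with the xm toolstack in Xen 3.x. A hedged sketch follows; the domain name "win2k3a" and the CPU numbers are examples for illustration, not details taken from this thread:

```shell
# Show which physical CPU each domain's vCPUs are currently running on
xm vcpu-list

# Pin Dom0's vCPU 0 to physical CPU 0 (example placement)
xm vcpu-pin Domain-0 0 0

# Pin a guest's vCPUs (guest name "win2k3a" is hypothetical) to other
# cores, so the two HVM guests and qemu-dm in Dom0 don't all contend
# for the same core
xm vcpu-pin win2k3a 0 2
xm vcpu-pin win2k3a 1 3
```

Checking `xm vcpu-list` before and after a transfer would at least rule out the "both guests on one core" scenario.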
> We originally thought it might have something to do with another
> issue we were experiencing (ping times reported in Windows were very
> strange, showing latencies of several thousand ms, and showing
> negative latency times as well), but that issue disappeared after
> setting the number of VCPUs to 1 (apparently there is a bug in the
> Windows Multiprocessor ACPI HAL; incidentally, does anyone know if
> this bug affects anything other than the displayed latency times?).

The negative latency is probably the one reported in the internal
Intel bug tracker here:
http://losvmm-bridge.sh.intel.com/bugzilla/show_bug.cgi?id=991

But it's not easy to know, since it's not accessible from outside
Intel (or at least not from the AMD network; I doubt it's really a
"hide this from AMD" attempt, but rather that the link is to an
internal Intel site).

My guess would be that the time is measured using the timestamp
counter (TSC), and the measurement fails because it takes the TSC
from two different processor cores at different times, which leads to
varying results. But that's speculation, and not based on any real
understanding of why this happens.

--
Mats

> We haven't had any success tracking this down so far. Any ideas?
>
> Thanks in advance for any help,
>
> Martin

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users