Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Mon, May 05, 2008 at 08:32:46PM -0400, jim burns wrote:

> - dom0 vs. domu: obviously, the standard to match is dom0 performance. (I
> suspect, tho', that non-xen kernel performance would be even better.)
> Looking at the 4k pattern numbers above, hvm severely lags dom0.
> Interestingly enough, for the 32k pattern, hvm is doing better than dom0.

domU doing better than dom0 usually happens when you use file-backed disks
on dom0: the dom0 page cache then skews the domU results. (Illustrative
disk lines are at the end of this message.)

> > Could you try iometer on dom0 to see what kind of performance you get
> > there.. or on linux pv domU?
>
> As you can see above, I did do dom0. I could do a linux pv, but your next
> idea interests me more.

OK. I think measuring a PV domU is worth trying too :)

> > And one more thing.. was your XP HVM single vcpu or more? Did you try
> > binding both dom0 and hvm domU to their own dedicated cpu cores?
>
> It was vcpu=2.

I think you should re-test with vcpus=1. Configure dom0 for one vcpu and
domU for one vcpu, and pin each domain to its own dedicated core, so that
no pcpu is shared between the domains. I think this is the "recommended"
setup from the Xen developers for getting maximum performance. Performance
will likely be worse when you have more vcpus in use than your actual pcpu
count. (A sketch of such a pinned setup is at the end of this message.)

> Yeeaaahh - everything tanked! MB/s down, Cpu % up, etc. Console was still
> a little sluggish. (I suppose pinning cpus might work better with more
> than one socket on the mobo.) I won't be trying that config again ;-)

Hmm.. interesting. Maybe it was because of the shared pcpus..

--
Pasi
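For reference, here is roughly what the file-backed vs. block-backed
distinction looks like in a Xen 3.x xm domain config. This is an
illustrative sketch; the image path and LVM volume name are made up:

    # File-backed disk: writes pass through dom0's page cache,
    # which can inflate apparent domU throughput in benchmarks.
    disk = [ 'file:/var/lib/xen/images/winxp.img,hda,w' ]

    # Block-backed disk (e.g. an LVM volume): bypasses dom0's
    # page cache, so benchmark numbers are more honest.
    disk = [ 'phy:/dev/vg0/winxp,hda,w' ]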
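And a sketch of the pinned single-vcpu setup described above, using the xm
toolstack of that era. The core numbers are only examples; adjust them to
your hardware:

    # Give dom0 a single vcpu and pin it to physical core 0
    # (vcpu-set needs a dom0 kernel with cpu hotplug support).
    xm vcpu-set Domain-0 1
    xm vcpu-pin Domain-0 0 0

    # In the domU config file: one vcpu, pinned to physical core 1.
    vcpus = 1
    cpus = "1"

    # Verify that no physical core is shared between the domains.
    xm vcpu-list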