
Re: [Xen-users] Poor Windows 2003 + GPLPV performance compared to VMWare



On 14/09/12 23:30, Ian Campbell wrote:
> http://xenbits.xen.org/docs/4.2-testing/ has man pages for the config
> files. These are also installed on the host as part of the build.
> 
> If you are using xend then the xm ones are a bit lacking. However xl is
> mostly compatible with xm so the xl manpages largely apply. There's also
> a bunch of stuff on http://wiki.xen.org/wiki.

Thanks for the pointer. I'm using 4.1, but I guess most of it
will still be the same.

>>>> device_model    = '/usr/lib/xen-default/bin/qemu-dm'
>>>> localtime    = 1
>>>> name        = "vm1"
>>>> cpus        = "2,3,4,5"    # Which physical CPU's to allow
>>>
>>> Have you pinned dom0 to use pCPU 1 and/or pCPUs > 6?
>>
>> No, how should I pin dom0 to cpu0?
> 
> dom0_vcpus_pin as described in
> http://xenbits.xen.org/docs/4.2-testing/misc/xen-command-line.html

Thanks. I'll need to reboot the dom0 to apply this; I'll do that as soon
as the current scheduled task is complete.
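
For my own reference, I think on my Debian dom0 that means adding the
option to the Xen line in grub, something like the below (treat this as
a rough guess, the exact file/variable might differ on my setup):

    # /etc/default/grub -- pin dom0's vcpus to the matching pcpus at boot
    GRUB_CMDLINE_XEN="dom0_vcpus_pin"

...followed by update-grub and the reboot.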

> You have:
>         cpus = "2,3,4,5"
> which means "let all the guests VCPUs run on any of PCPUS 2-5".
> 
> It sounds like what you are asking for above is:
>         cpus = [2,3,4,5]
> Which forces guest vcpu0=>pcpu=2, 1=>3, 2=>4 and 3=>5.
> 
> Subtle I agree.

Ugh... OK, I'll give that a try. BTW, this seems different from Xen 4.0
(from Debian stable), which appears to magically do what I meant to say,
or perhaps I'm just lucky on those machines :)
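
Just so I'm sure I've understood the syntax before changing it, in the
vm1 config that would be going from:

    cpus        = "2,3,4,5"    # any vcpu may run on any of pcpus 2-5

to:

    cpus        = [2,3,4,5]    # strict pinning: vcpu0->pcpu2, vcpu1->pcpu3, ...

and I gather I can then confirm the affinity with "xm vcpu-list vm1"
(or "xl vcpu-list vm1").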

> Do you have a specific reason for pinning? I'd be tempted to just let
> the scheduler do its thing unless/until you determine that it is causing
> problems.

The reasons for pinning are:
a) to stop the scheduler from moving the vCPUs around between pCPUs,
which, from my understanding, improves performance
b) when running multiple domUs, to let a bunch of domUs share one pCPU
while giving one or more dedicated pCPUs to other domUs (i.e. I use this
as a type of prioritisation/performance tuning; a rough sketch is below).

In this case there is only a single VM, though if some hardware is lost
(other physical machines) then I will end up with multiple VMs...
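
To illustrate (b) with made-up guest names, the sort of thing I have in
mind across the configs is:

    # vm1.cfg - the important guest gets dedicated pcpus
    cpus        = [2,3,4,5]

    # vm2.cfg / vm3.cfg - low-priority guests share a single pcpu
    cpus        = "6"

so the busy guest can never be starved by the others.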

>>> How many dom0 vcpus have you configured?
>>
>> I assume by default it takes all of them...
> 
> Correct. dom0_max_vcpus will adjust this for you.

I'll adjust this on the next reboot...
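
So after the next reboot I expect the Xen command line to end up as
something like the following (the vcpu count is just an example, I still
need to pick a sensible number):

    GRUB_CMDLINE_XEN="dom0_max_vcpus=2 dom0_vcpus_pin"

and I can then double-check what dom0 actually got with
"xm vcpu-list Domain-0".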

>>> And as James suggests it would also be useful to benchmark iSCSI running
>>> in dom0 and perhaps even running on the same system without Xen (just
>>> Linux) using the same kernel. I'm not sure if VMware offers something
>>> similar which could be used for comparison.
>>
>> Well, that is where things start to get complicated rather quickly...
>> There are a lot of layers here, but I'd prefer to look at the issues
>> closer to xen first, since vmware was working from an identically
>> configured san/etc, so nothing at all has changed there. Ultimately, the
>> san is using 3 x SSD in RAID5. I have done various testing in the past
>> from plain linux (with older kernel 2.6.32 from debian stable) and
>> achieved reasonable figures (I don't recall exactly).
> 
> I was worried about the Linux side rather than the SAN itself, but it
> sounds like you've got that covered.

At this stage, the limiting factor should be the single gigabit ethernet
connection from the physical machine to the network. (The SAN side has
4 x gigabit ethernet.)
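
When I do get around to testing from dom0 itself, I'm planning on
something as simple as the following against the iSCSI LUN (the device
name is just a placeholder for whatever the LUN shows up as):

    # sequential read from the raw block device, bypassing the page cache
    dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct

Over a single gigabit link I'd expect that to top out somewhere around
110-120 MB/s no matter what the SSDs behind it can do.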

This is a live network/system, but it has been a work in progress for
the past 12 months...

I'll update further once I can do some testing and get some answers.
I'll first do a test with only the shadow_memory change, and then, if
there is no big improvement, I'll reboot with the changes to the dom0
CPUs etc. and test again.
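
For that first test the only change to the vm1 config will be along the
lines of (the value still to be decided, this is just the shape of it):

    shadow_memory    = 64    # MB reserved for the HVM guest's shadow pagetables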

Thank you for your advice.

Regards,
adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

