
Re: [Xen-users] Poor Windows 2003 + GPLPV performance compared to VMWare



On 15/09/12 00:53, Adam Goryachev wrote:
> On 14/09/12 23:30, Ian Campbell wrote:
>
>>>>> device_model    = '/usr/lib/xen-default/bin/qemu-dm'
>>>>> localtime    = 1
>>>>> name        = "vm1"
>>>>> cpus        = "2,3,4,5"    # Which physical CPU's to allow
>>>> Have you pinned dom0 to use pCPU 1 and/or pCPUs > 6?
>>> No, how should I pin dom0 to cpu0 ?
>> dom0_vcpus_pin as described in
>> http://xenbits.xen.org/docs/4.2-testing/misc/xen-command-line.html
> Thanks, I'll need to reboot the dom0 to apply this, will do as soon as
> this current scheduled task is complete.
OK, I have pinned dom0 to cpu0, and this had no effect on performance.
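For reference, the hypervisor boot change was roughly the following (a
minimal sketch, assuming Debian's /etc/default/grub; the exact variable
name may differ on other setups):

GRUB_CMDLINE_XEN_DEFAULT="dom0_vcpus_pin"    # then update-grub and reboot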
>> You have:
>>         cpus = "2,3,4,5"
>> which means "let all the guests VCPUs run on any of PCPUS 2-5".
>>
>> It sounds like what you are asking for above is:
>>         cpus = [2,3,4,5]
>> Which forces guest vcpu0=>pcpu=2, 1=>3, 2=>4 and 3=>5.
>>
>> Subtle I agree.
> Ugh... ok, I'll give that a try. BTW, it would seem this is different
> from xen 4.0 (from debian stable) where it seems to magically do what I
> meant to say, or I'm just lucky on those machines :)
Actually, the above syntax doesn't work:
cpus        = [2,3,4,5]    # Which physical CPU's to allow
Error: 'int' object has no attribute 'split'

Once I reverted to:
cpus    = "2,3,4,5"
I can then boot again, but on reboot I get this:
xm vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     0   r--     148.9 0
cobweb                               6     0     5   ---       0.5 2-5
cobweb                               6     1     -   --p       0.0 2-5
cobweb                               6     2     -   --p       0.0 2-5
cobweb                               6     3     -   --p       0.0 2-5

So it isn't pinning each vcpu to a specific cpu... but I suppose the
scheduler should be smart enough to do it well anyway...
Performance is still at the same level.
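If strict per-vcpu pinning turns out to matter, these are the two things
I'd try next (untested here; the list-of-strings config syntax is an
assumption on my part, xm vcpu-pin is as documented):

cpus = ["2", "3", "4", "5"]    # one affinity spec per vcpu: vcpu0->2, vcpu1->3, ...

or at runtime:

xm vcpu-pin cobweb 0 2
xm vcpu-pin cobweb 1 3
xm vcpu-pin cobweb 2 4
xm vcpu-pin cobweb 3 5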

>> Do you have a specific reason for pinning? I'd be tempted to just let
>> the scheduler do its thing unless/until you determine that it is causing
>> problems.
> The only reason for pinning is:
> a) To stop the scheduler from moving the vCPUs around between pCPUs,
> which from my understanding improves performance
> b) when running multiple domUs, I sometimes want a bunch of domUs to
> share one CPU, while giving one or more dedicated CPUs to other domUs
> (i.e. I use this as a type of prioritisation/performance tuning).
>
> In this case, there is only a single VM, though if some hardware is lost
> (other physical machines) then this host will end up with multiple VMs...
>
>>>> How many dom0 vcpus have you configured?
>>> I assume by default it takes all of them...
>> Correct. dom0_max_vcpus will adjust this for you.
> Will adjust on the next reboot....
Done, dom0 is set to 1 cpu, but it still makes no difference to performance.
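For completeness, the boot line now looks something like this (again
assuming the Debian grub variable), plus a quick sanity check from dom0:

GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=1 dom0_vcpus_pin"
xm vcpu-list Domain-0    # should show a single VCPU pinned to pCPU 0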
>>>> And as James suggests it would also be useful to benchmark iSCSI running
>>>> in dom0 and perhaps even running on the same system without Xen (just
>>>> Linux) using the same kernel. I'm not sure if VMware offers something
>>>> similar which could be used for comparison.
>>> Well, that is where things start to get complicated rather quickly...
>>> There are a lot of layers here, but I'd prefer to look at the issues
>>> closer to xen first, since vmware was working from an identically
>>> configured san/etc, so nothing at all has changed there. Ultimately, the
>>> san is using 3 x SSD in RAID5. I have done various testing in the past
>>> from plain linux (with older kernel 2.6.32 from debian stable) and
>>> achieved reasonable figures (I don't recall exactly).
>> I was worried about the Linux side rather than the SAN itself, but it
>> sounds like you've got that covered.
> At this stage, the limiting factor should be the single gigabit
> ethernet connection from the physical machine to the network. (The SAN
> side has 4 x gigabit ethernet.)
>
> This is a live network/system, but it has been a work in progress for
> the past 12 months...
>
> I'll update further once I can get some testing and answers... Will do a
> test with only changing the shadow_memory, and then if no big
> improvement, will reboot with the changes to the dom0 cpus etc, and test
> again.
>
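For the record, the shadow_memory test mentioned above was just a bump
in the HVM config along these lines (the value 16 is only an
illustrative guess on my part, not necessarily what I used):

shadow_memory = 16    # MB set aside for shadow pagetables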

I'm really at a bit of a loss as to where to go from here... The
standard performance improvements don't seem to make any difference at
all, and I'm running out of ideas...

Could you suggest a "standard" tool which would allow me to test disk IO
performance (my initial suspicion for the slow performance) and also CPU
performance (which I'm starting to suspect as well) in Windows (domU),
Linux (a domU I can create for testing) and Linux (dom0)? Then I can see
where performance is lost (CPU/disk) and at what layer (dom0/domU) etc...
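In case it helps frame the answer: on the Linux side (dom0 or a test
domU) I was thinking of something along these lines (just a sketch; the
fio parameters and test paths are my own guesses):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct    # rough sequential write
fio --name=randread --filename=/tmp/fiotest --rw=randread --bs=4k --size=1G \
    --direct=1 --runtime=60 --time_based --group_reporting       # random read

but a pointer to whatever people here consider the standard tool would be
much appreciated.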

Thanks,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

