
Re: [Xen-users] Poor Windows 2003 + GPLPV performance compared to VMWare



On Fri, 2012-09-14 at 15:53 +0100, Adam Goryachev wrote:
> On 14/09/12 23:30, Ian Campbell wrote:
> > http://xenbits.xen.org/docs/4.2-testing/ has man pages for the config
> > files. These are also installed on the host as part of the build.
> > 
> > If you are using xend then the xm ones are a bit lacking. However xl is
> > mostly compatible with xm so the xl manpages largely apply. There's also
> > a bunch of stuff on http://wiki.xen.org/wiki.
> 
> Thanks for the pointer. I'm using 4.1 though, but I guess most of it
> will still be the same.

Right.

> 
> > You have:
> >         cpus = "2,3,4,5"
> > which means "let all the guests VCPUs run on any of PCPUS 2-5".
> > 
> > It sounds like what you are asking for above is:
> >         cpus = [2,3,4,5]
> > Which forces guest vcpu0=>pcpu=2, 1=>3, 2=>4 and 3=>5.
> > 
> > Subtle I agree.
> 
> Ugh... ok, I'll give that a try. BTW, it would seem this is different
> from xen 4.0 (from debian stable) where it seems to magically do what I
> meant to say, or I'm just lucky on those machines :)

It's not impossible; xend is largely unmaintained, but it does get
occasional "obvious" fixes (which sometimes turn out not to be so
obvious).
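
For reference, a rough sketch of how the two forms above look in a
domain config, plus how to check what actually got applied (the domain
name "win2003" is just made up here):

        vcpus = 4
        cpus = "2,3,4,5"        # any vcpu may run on any of pcpus 2-5

        vcpus = 4
        cpus = [2, 3, 4, 5]     # vcpu0->pcpu2, vcpu1->pcpu3, and so on

Then verify with "xm vcpu-list win2003" (or "xl vcpu-list" if you move
to xl), which shows the CPU affinity of each vcpu.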

> > Do you have a specific reason for pinning? I'd be tempted to just let
> > the scheduler do its thing unless/until you determine that it is causing
> > problems.
> 
> The only reason for pinning is:
> a) To stop the scheduler from moving the vCPUs around between pCPUs;
> from my understanding this improves performance

It can, but it can also cause the opposite if not used carefully.

I'm no expert on scheduling vs. pinning, but one thing to watch for in
particular is the relationship between dom0 and guest VCPUs when pinning
one or both of them. Depending on the workload, putting them on either
the same or distinct sets of pCPUs can be beneficial.
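
As a very rough sketch (the domain name is made up, and the right split
obviously depends on your box and workload), separating them might look
like:

        # keep dom0 on pcpus 0 and 1...
        xm vcpu-pin Domain-0 0 0
        xm vcpu-pin Domain-0 1 1
        # ...and leave the guest on pcpus 2-5 via cpus= in its config,
        # as above.

(Booting Xen with dom0_max_vcpus=2 dom0_vcpus_pin does something similar
for dom0 at boot time.)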

I've also heard that mixing pinned and unpinned VCPUs on a pCPU can
cause unexpected behaviours.

> b) When running multiple domUs, I either want a bunch of domUs to share
> one CPU, or I want one or more dedicated CPUs for other domUs (i.e. I
> use this as a type of prioritisation/performance tuning).

You might find cpupools in 4.1+ quite handy for managing this.
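
A minimal sketch with xl (pool name, file name and domain name are all
made up; check the xl/cpupool docs for the exact config syntax on 4.1):

        # pool-slow.cfg, roughly:
        name = "pool-slow"
        sched = "credit"
        cpus = ["1"]

        # free pcpu 1 from the default pool, create the new pool, then
        # move a domU into it (or put pool="pool-slow" in its config):
        xl cpupool-cpu-remove Pool-0 1
        xl cpupool-create pool-slow.cfg
        xl cpupool-migrate somedomu pool-slow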

> In this case there is only a single VM, though if some hardware is lost
> (other physical machines) then I will end up with multiple VMs...

Don't forget that dom0 counts as a VM as well.

Ian.


