Re: [Xen-users] CPU scheduling and allocated all VCPU.



Provisioning each domU with a VCPU count equal to the number of PCPUs is not a good idea, especially for database-type workloads. The problem is that Xen gives domUs no way to tell whether the resources presented to them are being used by anyone else - so when two domUs come under load, each will try to use all 32 VCPUs it sees available. Every domU, even one under minimal load of its own, then has to fight for processor time, while the Xen scheduler juggles all those runnable VCPUs and competes for cycles itself. Suffice it to say, this is suboptimal.

If you're really desperate to squeeze all possible performance out of this host, the easiest way is to skip Xen and run everyone on shared bare metal - each process will see the full 32 cores, and there won't be any hypervisor overhead. However, I assume you're using Xen for good reason, so my suggestion would be to set up some system on the dom0 to dynamically allocate VCPUs to domUs. This lets you 'burst' performance so that a single domU can have the lion's share of the host's CPU cycles, while avoiding a scenario where every domU is attempting to use every cycle the machine offers. It's not an out-of-the-box solution, and how well it would play with Oracle is anyone's guess, but it's definitely better than hoping that no two domUs try to make use of the resources they're given at the same time.
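
For illustration, a minimal sketch of what such a dom0-side balancer could look like, assuming Python and the stock xl toolstack ('xl vcpu-list' and 'xl vcpu-set' are standard xl commands; the domain names, thresholds and intervals below are invented for the example, and vcpu-set can only grow a domU up to the maxvcpus it was booted with):

    #!/usr/bin/env python
    # Hypothetical vCPU balancer sketch, run from dom0: grow busy domUs,
    # shrink idle ones, within each domU's maxvcpus ceiling.
    import subprocess
    import time

    DOMUS = ["db1", "db2", "web1"]   # example domU names
    MIN_VCPUS, MAX_VCPUS = 2, 16     # example per-domU bounds

    def vcpu_usage(domain, interval=5.0):
        # Sample the cumulative Time(s) column of 'xl vcpu-list' twice and
        # return (vcpu_count, busy_fraction); 1.0 means all vCPUs pegged.
        def snapshot():
            out = subprocess.check_output(["xl", "vcpu-list", domain])
            rows = [l.split() for l in out.decode().splitlines()[1:] if l.strip()]
            return len(rows), sum(float(r[5]) for r in rows)
        n, t0 = snapshot()
        time.sleep(interval)
        n, t1 = snapshot()
        return n, (t1 - t0) / (interval * n)

    while True:
        for dom in DOMUS:
            count, busy = vcpu_usage(dom)
            if busy > 0.8 and count < MAX_VCPUS:
                # hot-plug extra vCPUs for a busy domU
                subprocess.check_call(["xl", "vcpu-set", dom,
                                       str(min(count + 2, MAX_VCPUS))])
            elif busy < 0.2 and count > MIN_VCPUS:
                # take one vCPU away from an idle domU
                subprocess.check_call(["xl", "vcpu-set", dom, str(count - 1)])
        time.sleep(30)

Something along these lines keeps the total number of runnable vCPUs closer to the number of PCPUs while still letting a single domU burst.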

Also, as a fairly irrelevant aside: if 16 of those PCPUs come from SMT (hyper-threading), you may want to test how well you perform with SMT disabled, as there are substantial anecdotal reports of it decreasing performance.
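
If you want to check what the hypervisor sees before touching anything, 'xl info' reports the topology from dom0; a threads_per_core value above 1 means SMT is active (disabling it is normally done in the firmware/BIOS setup):

    xl info | grep -E 'nr_cpus|cores_per_socket|threads_per_core'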

On Tue, Jul 1, 2014 at 3:51 PM, lee <lee@xxxxxxxxxxxxxxx> wrote:
Sophie <sophie@xxxxxxxxxxxx> writes:

> Our DBA team, who are new to XEN and virtualization, seem to have a
> heightened interest in XEN and have asked me this:
>
> ** Why don't we allocate 32 VCPUS to all virtual machines so that they
> can share all resources, and when they need CPUs they can access those
> that were sitting idle ** Their logic was that VCPUs could be better
> distributed like this.
>
> My question to you is what do you think?

That's what I thought would make sense :) You probably don't want to do
that because it can cause delays when more vCPUs are supposed to do
something at once than there are pCPUs to run them.

I think it's better to make sure that dom0, and whatever VMs others may
have to wait on, get sufficient CPU in the first place. How much CPU
that is depends on workload.
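
One common way to make sure of that is to reserve a few pCPUs for dom0 on the Xen command line (dom0_max_vcpus and dom0_vcpus_pin are standard Xen boot options) and keep the domUs off those pCPUs via "cpus=" in their configs; the numbers here are only an example for a 32-pCPU box:

    # on the Xen line of the GRUB entry
    dom0_max_vcpus=4 dom0_vcpus_pin

    # in each domU's xl config: stay off dom0's pCPUs 0-3
    cpus = "4-31"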

All the rest of it also depends on workload. I have some VMs that
basically only need one CPU (or even less) because more CPUs don't
benefit what they are doing. I gave them 2 vCPUs because it won't hurt
anything when they sometimes use 2 pCPUs, just in case there is
something they can do in parallel, and so that other VMs need not wait
on them. If other VMs had to wait (like on the answer to a DNS query),
their vCPUs would have to sit idle anyway, and it won't hurt anything to
have an otherwise idle pCPU do something instead (like helping to answer
the DNS query faster).

So you probably want to monitor what each VM does with its CPUs.
Perhaps none of them needs 32; perhaps some run just as well with fewer,
and some benefit from having more. Going by that, you can try to achieve
some optimum by giving VMs as few vCPUs as they need and by giving
additional vCPUs to those VMs that actually take advantage of them.
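
The stock tools are enough for that kind of monitoring from dom0:

    xentop          # live per-domain CPU%, memory, network
    xl vcpu-list    # each vCPU, the pCPU it runs on, state, cumulative time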

Handing out all pCPUs as vCPUs to every VM probably doesn't work as
well as trying to achieve such an optimum.

Overcommitting CPUs works fine --- probably up to some point at which
the pCPUs can't keep up, and/or at which overall performance goes down
due to pCPUs needing to access too many different memory areas.
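
That last effect is essentially NUMA: vCPUs that wander across sockets
end up reaching into remote memory. If it shows up, one option is to pin
a domU's vCPUs to one node; the domain name and CPU range below are just
an example for a two-socket box where pCPUs 0-15 share a node:

    xl vcpu-pin mydomu all 0-15    # at run time, or equivalently
    cpus = "0-15"                  # in the domU's xl config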

I guess assigning, in total, about 1.75 times as many vCPUs as pCPUs is
a good measure to start with. Of course, it also depends on workload
and especially on timing ...

I'd start by assigning more vCPUs to the busiest VMs (provided that
the CPUs are actually used, and considering VMs that could become
bottlenecks) until 56 vCPUs (1.75 x 32) are assigned in total, see how it
goes and fine-tune from there.
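
Per domU, that translates into the standard vcpus/maxvcpus pair in the
xl config (numbers are only an example), with 'xl vcpu-set' able to move
a running domain anywhere between the two:

    vcpus    = 4    # what the domU boots with
    maxvcpus = 8    # ceiling reachable later via 'xl vcpu-set'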

You might also want to tune the memory ...
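
Memory can be adjusted on the fly the same way, through the balloon
driver and within the domU's configured maxmem; the name and size are
examples:

    xl mem-set mydomu 4096m    # balloon the domU to 4096 MiB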


--
Knowledge is volatile and fluid. Software is power.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
