
[Xen-devel] XEN Proposal

  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxxxxxxx>
  • Date: Wed, 10 Dec 2008 14:10:25 +0100
  • Delivery-date: Wed, 10 Dec 2008 05:11:00 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>


Currently the XEN credit scheduler has pitfalls when domain weights are
combined with cpu pinning (see the earlier threads, which include a rejected
patch).

We are facing this problem, too. We tried the patch mentioned above, but it
did not solve our problem completely, so we decided to develop a new solution.

Our basic requirement is to limit a set of domains to a set of physical cpus
while specifying the scheduling weight for each domain. The general (and in my
opinion best) solution would be the introduction of a "pool" concept in XEN.

Each physical cpu is dedicated to exactly one pool. At XEN boot all cpus
belong to pool0. A domain is a member of a single pool (dom0 will always be a
member of pool0), and there may be several domains in one pool. Scheduling
does not cross pool boundaries, so the weight of a domain is only related to
the weights of the other domains in the same pool. This makes it possible to
run a separate scheduler in each pool.
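To make the invariants concrete, here is a minimal sketch in C of the pool
bookkeeping described above. All names and structures here are invented for
illustration; nothing of this exists in XEN today.

```c
/* Hypothetical pool bookkeeping sketch -- invented names, not Xen code. */
#include <assert.h>
#include <stdint.h>

#define MAX_POOLS 8

struct pool {
    int         id;
    uint64_t    cpu_mask;    /* physical cpus owned by this pool */
    const char *sched_name;  /* e.g. "credit"; one scheduler per pool */
};

struct domain {
    int domid;
    int pool_id;  /* every domain belongs to exactly one pool */
    int weight;   /* only compared against domains in the same pool */
};

static struct pool pools[MAX_POOLS];

/* A cpu may belong to exactly one pool: assigning it to a new pool
 * first removes it from whichever pool currently owns it. */
static void pool_assign_cpu(int new_pool, int cpu)
{
    for (int i = 0; i < MAX_POOLS; i++)
        pools[i].cpu_mask &= ~(1ULL << cpu);
    pools[new_pool].cpu_mask |= 1ULL << cpu;
}
```

Making the single-owner rule an invariant of `pool_assign_cpu` means a
per-pool scheduler never has to check whether another pool's scheduler might
run work on one of its cpus.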

What changes would be needed?
- The hypervisor must be pool-aware. It needs information about the pool
  configuration (cpu mask, scheduler) and the pool membership of a domain.
  The scheduler must restrict itself to its own pool only.
- There must be an interface to set and query the pool configuration.
- At domain creation the domain must be added to a pool.
- libxc must be expanded to support the new interfaces.
- xend and the xm command must support pools, defaulting to pool0 if no pool
  is specified.
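The set/query interface from the list above might look like a new hypercall
family. The following is only an illustrative sketch with a toy in-memory
backend standing in for the hypervisor side; the op codes, structure, and
function names are all invented, not existing Xen interfaces.

```c
/* Invented pool-interface sketch -- not actual Xen hypercall definitions. */
#include <assert.h>
#include <stdint.h>

enum pool_op {
    POOL_OP_CREATE,     /* create a pool with a given cpu mask */
    POOL_OP_DESTROY,    /* destroy an (empty) pool */
    POOL_OP_INFO,       /* query the configuration of one pool */
    POOL_OP_MOVE_DOMAIN /* move a domain into a pool */
};

struct pool_op_args {
    enum pool_op op;
    int          pool_id;
    uint64_t     cpu_mask; /* in: POOL_OP_CREATE, out: POOL_OP_INFO */
    int          domid;    /* POOL_OP_MOVE_DOMAIN only */
};

/* Toy in-memory state standing in for the hypervisor's pool table. */
static uint64_t pool_masks[8];
static int      pool_used[8];

static int do_pool_op(struct pool_op_args *a)
{
    switch (a->op) {
    case POOL_OP_CREATE:
        if (pool_used[a->pool_id])
            return -1; /* pool id already taken */
        pool_used[a->pool_id]  = 1;
        pool_masks[a->pool_id] = a->cpu_mask;
        return 0;
    case POOL_OP_INFO:
        if (!pool_used[a->pool_id])
            return -1;
        a->cpu_mask = pool_masks[a->pool_id];
        return 0;
    case POOL_OP_DESTROY:
        pool_used[a->pool_id] = 0;
        return 0;
    default:
        return -1;
    }
}
```

libxc would then wrap each op in a thin C function, and xend would call those
wrappers from the xm pool-* commands.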

The xm commands could look like this:
xm pool-create pool1 ncpu=4              # create a pool with 4 cpus
xm pool-create pool2 cpu=1,3,5           # create a pool with 3 dedicated cpus
xm pool-list                             # show pools:
  pool      cpus          sched      domains
  pool0     0,2,4         credit     0
  pool1     6-9           credit     1,7
  pool2     1,3,5         credit     2,3
xm pool-modify pool1 ncpu=3              # set new number of cpus
xm pool-modify pool1 cpu=6,7,9           # modify cpu-pinning
xm pool-destroy pool1                    # destroy pool
xm create vm5 pool=pool1                 # start domain in pool1

There is much more potential in this approach:
- add memory to a pool? Could be interesting for NUMA
- recent discussions on xen-devel related to scheduling (credit scheduler for
  client virtualization) show some demand for further work regarding priority
  and/or grouping of domains
- this might be an interesting approach for migration of multiple related
  domains (pool migration)
- move (or migrate?) a domain to another pool
- ...

Any comments, suggestions, work already done, ...?
Otherwise we will start our work on this soon.


Juergen Gross                             Principal Developer
IP SW OS6                      Telephone: +49 (0) 89 636 47950
Fujitsu Siemens Computers         e-mail: juergen.gross@xxxxxxxxxxxxxxxxxxx
Otto-Hahn-Ring 6                Internet: www.fujitsu-siemens.com
D-81739 Muenchen         Company details: www.fujitsu-siemens.com/imprint.html

Xen-devel mailing list


