
[Xen-devel] Query on Roadmap items

  • To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Arjun <cse.syslab@xxxxxxxxx>
  • Date: Tue, 12 Feb 2008 02:51:08 -0500
  • Delivery-date: Mon, 11 Feb 2008 23:51:35 -0800
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:mime-version:content-type; b=KU1NKWFf5kwTcpQxvIJQhCGQFETgMdhZm4dncCdaS3t26CLv0tiGcGIiYs0SYxRw5Ppxi2cg//IBScFHSvH6aKH5wnsyWrSwLFXGxiHeAuBrfkjqF2zM5makBkZOTWnJbkZZVS/cEeBb1uLWl6r3ScPKPRxHh3mO1qE4VzkrhYE=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hello to all the Xen Developers,

I'm a grad student (MS) and have worked with Xen before, though only with
the (older) scheduler. I am interested in doing a Xen-related project as part
of my MS thesis/project work, and recently went through the Xen roadmap items
to look for something doable in about 2-3 months (working full time). I would
greatly appreciate some advice and guidance on this subject.

Here are some options for my MS project (my questions are embedded):

1) Automatic Memory Balancing:
Has the memory "balancing" item from the roadmap been done, or is it still open?
Here is the excerpt from the Xen roadmap document, page 30, July 2006:
"When using `xm mem-set' commands to control the amount of memory in a
guest its currently quite easy to set the target too low and create a `memory
crunch' that causes a linux guest kernel to run the infamous `oomkiller' and
hence render the system unstable. It would be far better if the interaction
between the balloon driver and linux's memory manager was more forgiving,
hence causing the balloon driver to `back off', or ask for more memory back
from xen to alleviate the pressure (up to the current `mem-max' limit).
The hard part here is deciding what in the memory management system to
trigger off at the point where the oom killer runs the system is typically
already unusable, so we want to be able to get in there earlier."
This appears to be the same item as "improve interaction between balloon
driver and page allocator" on page 12 of the PDF slides of the Xen roadmap.
I also saw some recent emails on this on the xen-devel list.
My questions are: Is someone already working on this, or is it still open?
If it's still open, I'd like to take it on as a project. Who could I discuss this with?

From what I understand, the purpose of this is that real (physical) memory
would be shared more equally (fairly?) between VMs, or according to a set
policy. This might also make it easier to live-migrate VMs onto a host with
existing VMs, and starting VMs on a host would be slightly easier, since the
existing VMs may not need to have their memory allocations manually
reconfigured. Is this correct? Could someone please explain the problem and
the benefits of solving it in more detail?
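To make sure I understand the balancing idea, here is a minimal sketch (in Python, purely illustrative; the watermark and step values are my own assumptions, not anything from Xen's code or API) of the kind of back-off policy the roadmap excerpt seems to describe:

```python
# Illustrative back-off policy for a balloon driver: when the guest's free
# memory drops below a low watermark, ask Xen for more memory (up to mem-max)
# instead of letting the guest kernel hit the OOM killer. All names and
# thresholds here are hypothetical.

def balloon_target(current_mb, free_mb, mem_max_mb,
                   low_watermark_mb=64, high_watermark_mb=256, step_mb=32):
    """Return a new memory target (in MB) for the guest.

    current_mb  -- memory currently allocated to the guest
    free_mb     -- free memory the guest kernel reports
    mem_max_mb  -- hard upper bound (the `mem-max' limit)
    """
    if free_mb < low_watermark_mb:
        # Memory crunch: back off by growing the guest, capped at mem-max.
        return min(current_mb + step_mb, mem_max_mb)
    if free_mb > high_watermark_mb:
        # Plenty of slack: release memory back to Xen for other guests,
        # but never shrink below what leaves high_watermark_mb free.
        return max(current_mb - step_mb,
                   current_mb - (free_mb - high_watermark_mb))
    return current_mb  # within the comfort zone; leave the target alone


if __name__ == "__main__":
    # Guest under pressure: 512 MB allocated, only 32 MB free, mem-max 1024 MB.
    print(balloon_target(512, 32, 1024))   # grows toward mem-max
    # Guest with lots of slack: 512 MB allocated, 400 MB free.
    print(balloon_target(512, 400, 1024))  # shrinks, freeing memory for Xen
```

The hard part the roadmap mentions, as I read it, is choosing the trigger (what plays the role of `free_mb` and the watermarks inside the guest's memory manager) early enough that the system is still usable.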

2) Gang Scheduling / Coscheduling VMs across a single host, or even separate hosts:

I saw a few emails on this on the xen-devel list, and it seems some people may
have a need for it. Is it worthwhile pursuing, i.e. making changes to the
credit scheduler to enable such scheduling?
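For reference, the core idea I have in mind is the classic one: a VM's runnable vCPUs are either all dispatched in the same time slice or not at all. A toy sketch (again illustrative Python, not the credit scheduler):

```python
# Toy illustration of gang scheduling (not Xen's credit scheduler): at each
# scheduling decision, pick a VM whose runnable vCPUs can ALL run at once on
# the available physical CPUs; the gang runs together or not at all.

def pick_gang(vms, num_pcpus):
    """Pick the first VM (in list order) whose runnable vCPU count fits on
    the available physical CPUs; return (vm_name, vcpus_scheduled) or None.

    vms -- list of (name, runnable_vcpus) tuples, e.g. [("dom1", 4), ...]
    """
    for name, runnable in vms:
        if 0 < runnable <= num_pcpus:
            return (name, runnable)  # whole gang fits: run them together
    # No VM's full gang fits; a real scheduler would idle, or fall back to
    # scheduling vCPUs independently as the credit scheduler does today.
    return None


if __name__ == "__main__":
    vms = [("dom1", 4), ("dom2", 2)]
    # With only 2 pCPUs, dom1's gang of 4 cannot run as a unit, so dom2 wins.
    print(pick_gang(vms, 2))
    print(pick_gang(vms, 8))
```

The interesting (and hard) parts would presumably be fairness accounting across gangs of different sizes, and the cross-host case, which needs clock/tick coordination between hosts.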

3) I'm also open to any other ideas so please feel free to suggest.


P.S. On a topic related to 1): From what I understand, Xen currently does not
support memory overcommitment (as VMware ESX Server does). Is there a plan to
implement this at some point?

Thanks in advance for your patience and advice.


Xen-devel mailing list


