
RE: [Xen-devel] [RFC] Transcendent Memory ("tmem"): a new approach to physical memory management



This discussion probably belongs on lkml rather than
xen-devel, but since we've started here already...

> > SSD interface or might help for hot-swap memory.
> 
> Not something I'd thought about. The problem with hot swap is generally
> one of managing to get stuff removed from a given physical page of RAM.
> Having more flexible allocators probably helps there simply because you
> can make space to relocate pages underneath the guest.

Hot-swap:  What I have in mind is as follows (and I'm
talking about a native kernel here, no Xen): hot-delete
requires some kind of kernel notification (provoked by
an operator or by hardware) that says "this physical address
range is going to disappear soon," at which point the kernel
will try to abandon that piece of memory.  Between the time
of the notification and the actual disappearance, which may
be a fairly long time, that memory goes unused.  During
that period, the memory could be configured and used as a
precache pool holding clean pages only, so that when the
actual removal event happens no valuable data is lost,
but in the meantime the memory isn't wasted.
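To make the idea concrete, here is a minimal user-space sketch (not kernel code; all names are hypothetical) of the invariant that makes this safe: the pool only ever holds clean pages, so dropping the whole pool at removal time loses nothing that can't be refetched from backing store.

```python
# Illustrative sketch of the proposed hot-delete precache pool.
# Names (PrecachePool, hot_delete) are invented for illustration;
# the real mechanism would live in the kernel's memory-hotplug path.

class PrecachePool:
    """Holds only clean (reconstructible) pages; losing the pool is safe."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}  # key -> page data (always a clean copy)

    def put(self, key, data):
        # Accept clean pages only; evict an arbitrary entry when full.
        if self.capacity == 0:
            return
        if len(self.pages) >= self.capacity:
            self.pages.pop(next(iter(self.pages)))
        self.pages[key] = data

    def get(self, key):
        # A miss is always recoverable: the backing store still has the data.
        return self.pages.get(key)

    def hot_delete(self):
        # Hardware removal event: drop everything. Because no page in the
        # pool was dirty, no valuable data is lost.
        self.pages.clear()
        self.capacity = 0


pool = PrecachePool(capacity=2)
pool.put("inode1:page0", b"clean page contents")
assert pool.get("inode1:page0") == b"clean page contents"  # hit
pool.hot_delete()
assert pool.get("inode1:page0") is None  # miss; refetch from disk instead
```

The point of the sketch is the asymmetry: between notification and removal the memory still earns its keep as a cache, yet removal never has to migrate or write back anything.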

SSD: Pardon my ignorance, but will SSD ever be fast enough
to be used as slow RAM (e.g. synchronously accessed, but
still classified as second-class RAM)?  If so, hiding it
from guests and only allowing it to be used via the tmem
interface might be a nice way to get the benefits of SSD
without the major kernel changes required to deal with two
classes of RAM (normal and slow).

> > I also think it might be used like compcache, but with
> > the advantage that clean page cache pages can also be
> > compressed.
> 
> Would it be useful to distinguish between pages the OS definitely
> doesn't care about (freed) and pages that can vanish, at least in
> terms of reclaiming them between guests? It seems that truly free
> pages are the first target and there may even be a proper hierarchy.

I think this would be useful for periods when a guest is
"down-revving" from very busy to idle, because it would
more proactively notify the hypervisor that a lot of
memory is available, without waiting for automated
ballooning to notice.  However, if the memory is only
temporarily free (say, between compiles in a make), the
information might be misleading.  It would be interesting
to study the distribution of the lengths of time between when:
to study the distribution of lengths of time between when:

1) a page is last written
2) the page is "repurposed" (or freed?) in the kernel
3) the page is written again

In the time between (2) and (3), the page is "idle"; if the
average interval is long enough, that page's worth of memory
could certainly be reclaimed by the hypervisor for another
guest.  (Does KVM already do this instead of ballooning?)
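The measurement itself is simple once you have per-page timestamps for the three events above. A small sketch (the trace format here is invented for illustration):

```python
# Hypothetical analysis sketch: given per-page timestamps for
# (1) last_write, (2) repurpose, (3) next_write, compute how long
# each page sat idle, i.e. the (2)->(3) window.

def idle_intervals(events):
    """events: list of (last_write, repurpose, next_write) timestamps."""
    return [next_write - repurpose for _, repurpose, next_write in events]


# Invented example trace, times in seconds.
trace = [(0.0, 1.5, 9.0), (2.0, 2.1, 2.3), (5.0, 6.0, 60.0)]
intervals = idle_intervals(trace)
avg = sum(intervals) / len(intervals)
# If the average idle interval is long relative to reclaim cost, the
# hypervisor could profitably take these pages for another guest.
```

In practice one would want the full distribution, not just the mean: a few very long idle intervals (pages that are effectively free) can hide behind many short ones.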

> first target and there may even be a proper hierarchy.

I definitely agree that there is a proper hierarchy and
that better taxonomizing within the kernel will pay off
sooner or later, at least in virtualized environments.

Thanks!
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
