
RE: [Xen-devel] yanked share, round 2



On 13 Jan 2006, at 20:09, Keir wrote:

>Just as a PCI device can be reset, we can kill an offensive device
>driver or other service domain and restart it. I think these problems
>need addressing in the tools / control plane, not with extra mechanism
>in Xen.
>[snip]

Are you suggesting that operator intervention is the durable solution?
Surely, that doesn't wash. ;^)

>Even if you add mechanism such that the mapping domain is
>made accountable, what should its clients do when it runs
>out of memory and finally hits the brick wall?

Why should the mapping domain be in any special danger of running out of
memory?  It is 'accountable' one-for-one for the pages it maps.  If a
surprise unshare occurs, the mapping domain can unmap and robustly
recover the associated under-page.
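
To make that concrete, here is roughly all the mapper has to do when a
share is yanked.  This is an untested sketch: the include paths and the
recover_yanked_mapping()/under_page bookkeeping are invented for
illustration, but the GNTTABOP interface it calls is the real one.

#include <linux/mm.h>               /* free_page()                    */
#include <asm-xen/hypervisor.h>     /* HYPERVISOR_grant_table_op();   */
#include <asm-xen/gnttab.h>         /*   paths approximate for 2.6    */

/* Tear down a mapping whose grant was yanked out from under us and
 * reclaim the local page that backed it.  'handle' and 'host_addr'
 * come from the original GNTTABOP_map_grant_ref; 'under_page' is our
 * own bookkeeping for the local frame. */
static void recover_yanked_mapping(grant_handle_t handle,
                                   unsigned long host_addr,
                                   unsigned long under_page)
{
        struct gnttab_unmap_grant_ref op = {
                .host_addr    = host_addr,
                .dev_bus_addr = 0,
                .handle       = handle,
        };

        /* Drop our PTE on the foreign frame.  After this, we hold no
         * reference that the granting domain has to wait on. */
        if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1) ||
            op.status != GNTST_okay)
                BUG();          /* an unmap we issued must not fail */

        free_page(under_page);  /* the under-page is back in our pool */
}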

Without advocating any particular approach, I'd say the architectural
goal is that either DomU can unilaterally decouple itself from the
other, which is not possible today.
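
For contrast, the granter's side is stuck with asking nicely.
Something like the following is the best it can do today (gnttab
signatures quoted from memory, so treat them as approximate):

#include <linux/errno.h>
#include <linux/mm.h>
#include <asm-xen/gnttab.h>         /* path approximate for 2.6 */

/* Best-effort teardown of a share we granted.  If the mapping domain
 * never lets go, neither the grant reference nor the page can ever be
 * reclaimed; that is the coupling at issue. */
static int try_revoke_share(grant_ref_t ref, unsigned long page)
{
        /* Returns zero while the remote domain still holds a mapping. */
        if (!gnttab_end_foreign_access_ref(ref, 0 /* read/write */))
                return -EBUSY;  /* coupled: wait forever, or leak */

        gnttab_free_grant_reference(ref);
        free_page(page);        /* only now is the frame really ours */
        return 0;
}

Whatever mechanism we settle on, the point is to make the -EBUSY case
survivable: either side can walk away, and each reclaims its own pages.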

-steve


-----Original Message-----
From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx] 
Sent: Friday, January 13, 2006 12:09 PM
To: King, Steven R
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] yanked share, round 2


On 13 Jan 2006, at 19:55, King, Steven R wrote:

> Hi Keir, I'm not familiar enough with zombie anatomy to help there, so
> let me try this reasoning: today's Xen architecture cannot promise
> that a shared page can ever be returned to normal, non-shared service.
> Thus, any DomU that routinely creates shared pages *must* eventually
> run out of memory.  Of course if all DomU's try to play nice, it would
> take a long while.  We still face a snag in which Xen allows DomU
> bugs, DomU crashes and DomU evil to accumulate over time.  By analogy
> to the hardware world, a PCI device that could not promise to let go
> of pages would be unacceptable.

Just as a PCI device can be reset, we can kill an offensive device
driver or other service domain and restart it. I think these problems
need addressing in the tools / control plane, not with extra mechanism
in Xen. At the end of the day, the memory backing these bogus/buggy
mappings has to come from somewhere. Even if you add mechanism such that
the mapping domain is made accountable, what should its clients do when
it runs out of memory and finally hits the brick wall?

  -- Keir
