
Re: [Xen-devel] [RFC PATCH] xs: use system's default stack size for xs_watch's reader thread



On Wed, Sep 21, 2016 at 09:50:30AM -0400, Chris Patterson wrote:
> On Wed, Sep 21, 2016 at 9:00 AM, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> > On Wed, Sep 21, 2016 at 08:51:07AM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Sep 20, 2016 at 05:29:39PM -0400, Chris Patterson wrote:
> >> > From: Chris Patterson <pattersonc@xxxxxxxxxxxx>
> >> >
> >> > xs_watch() creates a thread to listen to xenstore events.  Currently, the
> >> > thread is created with the greater of 16K or PTHREAD_STACK_MIN.
> >> >
> >> > There have been several bug reports and workarounds related to the issue
> >> > where xs_watch() fails because its call to pthread_create() for the
> >> > reader thread fails.  This is due to insufficient stack size given the
> >> > thread-local storage requirements of the applications and libraries
> >> > that are linked against libxenstore [1,2,3,4].
> >> >
> >> > Specifying the stack size appears to have been added to reduce memory
> >> > footprint (1d00c73b983b09fbee4d9dc0f58f6663c361c345).
> >>
> >> Ugh. 8MB.
> >
> > OOI isn't that 8MB of virtual memory, which means it shouldn't have a
> > real impact unless it is actually used?

/me nods. That is my recollection too. But it does mean that 'top'
shows the application as bigger (by 8MB).
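
For context, what the patch is changing amounts to roughly the sketch
below (illustrative only, from memory rather than the exact xs.c code):
the reader thread currently gets an explicit stack of
max(16K, PTHREAD_STACK_MIN) via pthread_attr_setstacksize(), and the
patch simply drops that attribute so pthread_create() falls back to the
system default (the 8MB discussed above).

    #include <limits.h>
    #include <pthread.h>

    /* Illustrative names, not necessarily those used in xs.c. */
    #define DEFAULT_READ_THREAD_STACKSIZE (16 * 1024)
    #ifdef PTHREAD_STACK_MIN
    #define READ_THREAD_STACKSIZE                              \
        ((DEFAULT_READ_THREAD_STACKSIZE < PTHREAD_STACK_MIN) ? \
         PTHREAD_STACK_MIN : DEFAULT_READ_THREAD_STACKSIZE)
    #else
    #define READ_THREAD_STACKSIZE DEFAULT_READ_THREAD_STACKSIZE
    #endif

    static void *read_thread(void *arg)
    {
        (void)arg;
        /* ... read xenstore watch events ... */
        return NULL;
    }

    static int start_read_thread(pthread_t *tid)
    {
        pthread_attr_t attr;
        int rc;

        if (pthread_attr_init(&attr) != 0)
            return -1;

        /* This explicit (small) stack is what breaks programs whose
         * TLS needs exceed it; removing this call means the thread
         * gets the system default stack instead. */
        rc = pthread_attr_setstacksize(&attr, READ_THREAD_STACKSIZE);
        if (rc == 0)
            rc = pthread_create(tid, &attr, read_thread, NULL);

        pthread_attr_destroy(&attr);
        return rc;
    }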

> >
> 
> From what I understand, that is correct.  At least in the Linux/glibc
> case, I believe the stack is allocated using anonymous mmap() and that
> resident memory usage shouldn't be greater than what you actually end
> up writing.  However, I do not know if this holds true universally...

That should be fairly easy to find out. One just needs to check
the RSS size. Not sure how to do that on FreeBSD though...
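
FWIW, a quick portable-ish way to check is getrusage(); ru_maxrss is
reported in kilobytes on both Linux and FreeBSD, so something like the
(untested) sketch below should show whether creating a thread with the
default 8MB stack actually moves the resident set:

    #include <stdio.h>
    #include <pthread.h>
    #include <sys/resource.h>
    #include <unistd.h>

    static void *idle_thread(void *arg)
    {
        (void)arg;
        pause();            /* park the thread without touching much stack */
        return NULL;
    }

    static long rss_kb(void)
    {
        struct rusage ru;

        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_maxrss;    /* peak RSS, in kilobytes */
    }

    int main(void)
    {
        pthread_t tid;

        printf("RSS before: %ld kB\n", rss_kb());
        if (pthread_create(&tid, NULL, idle_thread, NULL) != 0)
            return 1;
        sleep(1);           /* give the thread a moment to start */
        printf("RSS after:  %ld kB\n", rss_kb());
        return 0;
    }

(ru_maxrss is the peak rather than the current RSS, but since nothing
shrinks here that should be good enough for this test.)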

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

