
Re: [Xen-users] PV guests concurrent running limit

On 29/08/16 10:25, Kun Cheng wrote:
> OK. That sounds weird. Anything suspicious from xenstore or xl log? I
> cannot reproduce it here as I don't have a machine that could host so
> many guests...    
> On Sat, Aug 27, 2016 at 8:31 PM Andrey Schmiegelow <asw@xxxxxxxxx
> <mailto:asw@xxxxxxxxx>> wrote:
>     Hi Kun Cheng.
>     I'm sure there are enough resources. While running those 50 PV
>     guests I can deploy other guests in HVM mode.
>     The problem arises only when I try to deploy another PV guest
>     (beyond the 50).

Hmm, can you please share a typical guest config (pv and hvm, please)?
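
For reference, a minimal PV guest config usually looks something like the
sketch below. All names, paths, and sizes here are placeholders, not values
taken from your setup:

name    = "smtp-051"
kernel  = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest"
memory  = 512
vcpus   = 1
disk    = [ "phy:/dev/vg0/smtp-051,xvda,w" ]
vif     = [ "bridge=xenbr0" ]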

>>     On Sat, Aug 27, 2016 at 1:12 AM Andrey Schmiegelow <asw@xxxxxxxxx
>>     <mailto:asw@xxxxxxxxx>> wrote:
>>         Hello xen users
>>         I'm experiencing problems when trying to deploy more than 50
>>         paravirtualized guests.
>>         I found no relevant messages in the logs.
>>         The xl guest log follows:
>>         ==========================================
>>         Waiting for domain smtp-051 (domid 55) to die [pid 14104]
>>         Domain 55 has shut down, reason code 3 0x3
>>         Action for shutdown reason code 3 is destroy
>>         Domain 55 needs to be cleaned up: destroying the domain
>>         Done. Exiting now
>>         =========================================

Shutdown reason code 3 is "crash". So I guess the domain itself decided
to die. You could add:

on_crash = "preserve"

to the guest config. This will avoid destroying the domain when a crash
occurs. Further investigation could be rather hard, though. Maybe you
are able to extract some more information from the dead domain via

/usr/lib/xen/bin/xenctx -a <domid> 0

which will print the register contents and some stack information of the
domain (vcpu 0 in this case, assuming your domain has only 1 vcpu).
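
With on_crash = "preserve" set, a possible inspection workflow would be
(domid 55 is just the example from your log above; use whatever xl list
reports for the preserved guest):

# find the domid of the preserved (crashed) guest
xl list

# dump register contents and stack of vcpu 0 of that domain
/usr/lib/xen/bin/xenctx -a 55 0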

Can you add some domain kernel parameters to increase verbosity?
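
For example, something along these lines in the guest config (a sketch:
console=hvc0 is the standard PV console, and the loglevel/earlyprintk/
initcall_debug values are suggestions, not taken from your setup):

extra = "console=hvc0 earlyprintk=xen loglevel=8 initcall_debug"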


Xen-users mailing list

