
[Xen-users] Re: [libvirt] domain.info() sometimes returns state zero for running machines



Sorry, but I misunderstood your last post completely :-D Now I tried what you said, using the CVS version. I also dumped the "ret" value from virXen_getdomaininfo. The output was simple: ret=1, domain_flags=0 (right after XEN_GETDOMAININFO_FLAGS!!!). Sadly I don't have much time to dig deeper, but it seems this happens very seldom, or sometimes not at all (hard to debug).
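To make the significance of that zero explicit: with a raw flags value of 0 nothing survives the masking, so none of the DOMFLAGS_* cases further down can match and libvirt falls into the default branch. A tiny standalone illustration (not libvirt code; DOMFLAGS_HVM is assumed to be the Xen 3.x value 1<<1, though with flags of 0 the exact value doesn't matter):

   #include <stdio.h>

   #define DOMFLAGS_HVM (1 << 1)   /* assumed Xen 3.x value, illustration only */

   int main(void)
   {
       unsigned int domain_flags = 0;          /* the value observed above */
       unsigned int domain_state;

       domain_flags &= ~DOMFLAGS_HVM;          /* still 0 */
       domain_state = domain_flags & 0xFF;     /* still 0 */

       printf("domain_state = %u -> no DOMFLAGS_* case matches, "
              "so libvirt reports VIR_DOMAIN_NOSTATE (0)\n", domain_state);
       return 0;
   }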


Daniel P. Berrange wrote:
On Thu, Jun 18, 2009 at 09:40:08AM +0100, Andreas Sommer wrote:
I executed the commands "export LIBVIRT_DEBUG=1" and "virsh dominfo 2" (ID 2 was a running domU); this is the output:

-----------------------------------------------------------------------------------------
DEBUG: libvirt.c: virInitialize (register drivers)
DEBUG: xen_internal.c: xenHypervisorInit (Using new hypervisor call: 30002
)
DEBUG: xen_internal.c: xenHypervisorInit (Using hypervisor call v2, sys ver6 dom ver5
)
DEBUG: libvirt.c: virConnectOpenAuth (name=(null), auth=0xb7f49b60, flags=0)
DEBUG: libvirt.c: do_open (Probed xen:///)
DEBUG: libvirt.c: do_open (Probed qemu:///system)
DEBUG: libvirt.c: do_open (Using xen:/// as default URI, 2 hypervisor found)
DEBUG: libvirt.c: do_open (name "xen:///" to URI components:
 scheme xen
 opaque (null)
 authority (null)
 server (null)
 user (null)
 port 0
 path /
)
DEBUG: libvirt.c: do_open (trying driver 0 (Test) ...)
DEBUG: libvirt.c: do_open (driver 0 Test returned DECLINED)
DEBUG: libvirt.c: do_open (trying driver 1 (Xen) ...)
DEBUG: xen_unified.c: xenUnifiedOpen (Trying hypervisor sub-driver)
DEBUG: xen_unified.c: xenUnifiedOpen (Activated hypervisor sub-driver)
DEBUG: xen_unified.c: xenUnifiedOpen (Trying XenD sub-driver)
DEBUG: xen_unified.c: xenUnifiedOpen (Activated XenD sub-driver)
DEBUG: xen_unified.c: xenUnifiedOpen (Trying XS sub-driver)
DEBUG: xen_unified.c: xenUnifiedOpen (Activated XS sub-driver)
DEBUG: libvirt.c: do_open (driver 1 Xen returned SUCCESS)
DEBUG: libvirt.c: do_open (network driver 0 Test returned DECLINED)
DEBUG: libvirt.c: do_open (network driver 1 QEMU returned DECLINED)
DEBUG: remote_internal.c: doRemoteOpen (proceeding with name = xen:///)
DEBUG: libvirt.c: do_open (network driver 2 remote returned SUCCESS)
DEBUG: libvirt.c: do_open (storage driver 0 Test returned DECLINED)
DEBUG: libvirt.c: do_open (storage driver 1 storage returned DECLINED)
DEBUG: libvirt.c: do_open (storage driver 2 remote returned SUCCESS)
DEBUG: libvirt.c: virDomainLookupByID (conn=0x9b49048, id=2)
DEBUG: hash.c: __virGetDomain (New hash entry 0x9b580a0)
DEBUG: libvirt.c: virDomainGetID (domain=0x9b580a0)
DEBUG: libvirt.c: virDomainGetName (domain=0x9b580a0)
DEBUG: libvirt.c: virDomainGetUUIDString (domain=0x9b580a0, buf=0xbff74b07)
DEBUG: libvirt.c: virDomainGetUUID (domain=0x9b580a0, uuid=0xbff74aac)
DEBUG: libvirt.c: virDomainGetOSType (domain=0x9b580a0)
DEBUG: libvirt.c: virDomainGetInfo (domain=0x9b580a0, info=0xbff74b2c)
DEBUG: libvirt.c: virDomainGetAutostart (domain=0x9b580a0, autostart=0xbff74b44)
DEBUG: libvirt.c: virDomainFree (domain=0x9b580a0)
DEBUG: hash.c: virUnrefDomain (unref domain 0x9b580a0 ac06e4f0-59b1-11de-8a39-0800200c9a66 1)
DEBUG: hash.c: virReleaseDomain (release domain 0x9b580a0 ac06e4f0-59b1-11de-8a39-0800200c9a66)
DEBUG: hash.c: virReleaseDomain (unref connection 0x9b49048 xen:/// 2)
DEBUG: libvirt.c: virConnectClose (conn=0x9b49048)
DEBUG: hash.c: virUnrefConnect (unref connection 0x9b49048 xen:/// 1)
DEBUG: hash.c: virReleaseConnect (release connection 0x9b49048 xen:///)
-----------------------------------------------------------------------------------------

This is pretty weird because there are no debugging messages from Xen functions?!

Did you make the code changes I suggested below to add in the DEBUG()
statements? If so, then virsh is probably still using the system-installed
libvirt.so rather than the newly built one. If you run virsh directly from
the compiled source tree it'd probably work, e.g. ./src/virsh ...

Daniel P. Berrange wrote:
On Wed, Jun 17, 2009 at 04:04:20PM +0100, Andreas Sommer wrote:
I'm using Xen-3.2-1 on Debian 5.0.1-lenny and retrieve information about running domains using

domain.info()[0]

The domain object is retrieved via connection.lookupByUUIDString(...) and stored in a variable called "domain". Usually the running domains have state 1 (VIR_DOMAIN_RUNNING) or 2 (VIR_DOMAIN_BLOCKED), but sometimes 0 (VIR_DOMAIN_NOSTATE) is returned. Why does that happen? I don't think it is an error, because then it would've raised an exception...
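For reference, the C-level equivalent of that Python call is virDomainGetInfo(); a minimal sketch (the UUID is only a placeholder, taken from the debug log above):

   #include <stdio.h>
   #include <libvirt/libvirt.h>

   int main(void)
   {
       virConnectPtr conn = virConnectOpenReadOnly("xen:///");
       if (conn == NULL)
           return 1;

       virDomainPtr dom = virDomainLookupByUUIDString(conn,
               "ac06e4f0-59b1-11de-8a39-0800200c9a66");
       if (dom != NULL) {
           virDomainInfo info;
           if (virDomainGetInfo(dom, &info) == 0)
               /* info.state is the value domain.info()[0] returns:
                * 0 = VIR_DOMAIN_NOSTATE, 1 = VIR_DOMAIN_RUNNING,
                * 2 = VIR_DOMAIN_BLOCKED, ... */
               printf("state = %d\n", info.state);
           virDomainFree(dom);
       }

       virConnectClose(conn);
       return 0;
   }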
I think it is most likely a bug in our handling of the state info
from the hypervisor with certain Xen versions. I'm fairly sure we
should never get NO_STATE for any active domain.

If you want to try to troubleshoot the code, this is handled in the
xenHypervisorGetDomInfo() method in src/xen_internal.c.

It currently does this:

   domain_flags = XEN_GETDOMAININFO_FLAGS(dominfo);
   domain_flags &= ~DOMFLAGS_HVM; /* Mask out HVM flags */
   domain_state = domain_flags & 0xFF; /* Mask out high bits */
   switch (domain_state) {
    ....
   }
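For context, the elided cases map Xen's DOMFLAGS_* bits onto libvirt's states roughly like this (a sketch from memory of xen_internal.c from that era; the exact cases in your checkout may differ):

   switch (domain_state) {
       case DOMFLAGS_DYING:
           info->state = VIR_DOMAIN_SHUTDOWN;
           break;
       case DOMFLAGS_SHUTDOWN:
           info->state = VIR_DOMAIN_SHUTOFF;
           break;
       case DOMFLAGS_PAUSED:
           info->state = VIR_DOMAIN_PAUSED;
           break;
       case DOMFLAGS_BLOCKED:
           info->state = VIR_DOMAIN_BLOCKED;
           break;
       case DOMFLAGS_RUNNING:
           info->state = VIR_DOMAIN_RUNNING;
           break;
       default:
           info->state = VIR_DOMAIN_NOSTATE;  /* nothing matched */
   }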
Given that you see NO_STATE, I expect that none of the 'case' statements inside
the 'switch' are being matched. I'd be interested to know what the
'domain_state' value is immediately after it's fetched from the HV.
So you might try changing it to:
   domain_flags = XEN_GETDOMAININFO_FLAGS(dominfo);
   DEBUG("Raw HV state flag %x", domain_flags);
   domain_flags &= ~DOMFLAGS_HVM; /* Mask out HVM flags */
   domain_state = domain_flags & 0xFF; /* Mask out high bits */
   DEBUG("Masked HV state flag %x", domain_flags);
   switch (domain_state) {
    ....
   }
   DEBUG("libvirt state flag %x", info->state);

And then running 'LIBVIRT_DEBUG=1 virsh dominfo GUEST' and capturing the output when it reports 'nostate'.

Daniel


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

