
Re: [Xen-devel] Error: Device 2049 (vbd) could not be connected (only when in auto/)



Ewan

Ewan Mellor <ewan@xxxxxxxxxxxxx> wrote on 02/10/2006 02:43:43 PM:

> On Fri, Feb 10, 2006 at 02:05:06PM -0500, John S Little wrote:
> 
> > When I have a domU that resides in the auto directory I get 'Error:
> > Device 2049 (vbd) could not be connected' and the domain starts in a
> > paused mode.  I am using Xen 3.0.1.  Shutting down domU and restarting
> > has the same effect.  Following is some output from the xm log and an
> > attempted restart.
> > 
> > On boot I get the following error:
> > Restoring Xen domains: xen0vm0-64Error: not a valid guest state file:
> > pfn count read!.
> 
> This is a corrupt save file.  Given that you are having a timeout
> shutting down, I would say that it's most likely that the shutdown of
> the domain is failing, and this is leaving a half-finished save file
> lying around.
> 
> > Directly after dom0 reboot:
> > 
> > xen0:/etc/xen/scripts # xm list
> > Name                              ID Mem(MiB) VCPUs State  Time(s)
> > Domain-0                           0      251     4 r-----   329.4
> > xen0vm0-64                         4      256     1 --p---     0.0
> > 
> > 
> > From the logs after dom0 reboot:
> > 
> > [2006-02-08 12:03:00 xend] ERROR (SrvBase:87) Request wait_for_devices
> > failed.
> > Traceback (most recent call last):
> >   File "/usr/lib64/python/xen/web/SrvBase.py", line 85, in perform
> >     return op_method(op, req)
> >   File "/usr/lib64/python/xen/xend/server/SrvDomain.py", line 72, in 
> > op_wait_for_devices
> >     return self.dom.waitForDevices()
> >   File "/usr/lib64/python/xen/xend/XendDomainInfo.py", line 1350, in 
> > waitForDevices
> >     self.waitForDevices_(c)
> >   File "/usr/lib64/python/xen/xend/XendDomainInfo.py", line 979, in 
> > waitForDevices_
> >     return self.getDeviceController(deviceClass).waitForDevices()
> >   File "/usr/lib64/python/xen/xend/server/DevController.py", line 134, 
in 
> > waitForDevices
> >     return map(self.waitForDevice, self.deviceIDs())
> >   File "/usr/lib64/python/xen/xend/server/DevController.py", line 169, 
in 
> > waitForDevice
> >     raise VmError("Device %s (%s) could not be connected.\n%s" %
> > VmError: Device 2049 (vbd) could not be connected.
> > Device /dev/xensan/xenvm1-64 is mounted in a guest domain,
> > and so cannot be mounted now.
> > 
> > And an attempt at shutting down the domU and restarting:
> > 
> > xen0:/etc/xen/scripts # xm unpause xen0vm0-64
> > xen0:/etc/xen/scripts # xm shutdown xen0vm0-64
> > xen0:/etc/xen/scripts # xm list
> > Name                              ID Mem(MiB) VCPUs State  Time(s)
> > Domain-0                           0      251     4 r-----   326.4
> > xen0:/etc/xen/scripts # xm create -c ../auto/xen0vm0-64
> > Using config file "../auto/xen0vm0-64".
> > Error: Device 2049 (vbd) could not be connected.
> > Device /dev/xensan/xenvm1-64 is mounted in a guest domain,
> > and so cannot be mounted now.
> 
> This is a different bug, I think.

That is probably true.  I made some adjustments to the 
/etc/sysconfig/xendomains file and happened to be watching the boot at the 
machine console instead of over ssh when I saw that message during a 
reboot.  When I checked after receiving your email, xm list showed only 
dom0 and nothing for the domU in a paused state.
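
In case it helps, my plan is simply to clear the half-finished save image 
by hand before the next reboot, roughly like this (this assumes the 
default XENDOMAINS_SAVE directory of /var/lib/xen/save and a save file 
named after the domain; both may differ on a given install):

    # remove the stale save file so the xendomains script does not try
    # to restore the corrupt image at boot
    xen0:/etc/xen/scripts # rm /var/lib/xen/save/xen0vm0-64

That way the domain should come up with a fresh xm create from auto/ 
rather than a restore.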
 
> For both of these, could you use xen-bugtool to submit your logs so that 
I can
> take a look?

Yes, I have submitted bug #527 for the "vbd could not be connected" error 
and #528 for the pfn count error.  I ran xen-bugtool for #528 and attached 
the output to that bug.  I will recreate #527 and do the same for it.

In either case it seems as if changing the /etc/sysconfig/xendomains 
settings causes one or the other to show up.
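
For reference, the settings I have been adjusting are roughly the ones 
below (shown with what I believe are the stock defaults on this install, 
so treat the exact values as approximate):

    # /etc/sysconfig/xendomains (excerpt)
    XENDOMAINS_AUTO=/etc/xen/auto        # domains started automatically at boot
    XENDOMAINS_RESTORE=true              # restore saved domains at boot
    XENDOMAINS_SAVE=/var/lib/xen/save    # where domains are saved on dom0 shutdown
    XENDOMAINS_SHUTDOWN="--halt --wait"  # how remaining domains are shut down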
 
> Thanks,
> 
> Ewan.

Regards,

John 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

