
Re: [Xen-API] Post upgrade to xcp 1.5 some VM's "Boot Device: Hard drive - failure: could not read boot disk"



I've tried the PV-* options below and am surprised to find no change in 
behavior. Is there somewhere in the dom0 logs where I should see references to 
the dom0-provided kernel and initrd being loaded or handed to the guest? (I've 
tried with no path and with /boot/guest, with no change....)
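For what it's worth, on a stock XCP dom0 the domain-build and bootloader activity is usually logged by xapi. A quick way to look (log paths assumed from a stock install; adjust if your release differs):

```shell
# In dom0: xapi's main log; bootloader/kernel handling for PV guests
# typically shows up here (paths may vary between XCP releases)
grep -i -E 'bootloader|pygrub|kernel' /var/log/xensource.log | tail -n 50

# Console/daemon-side messages sometimes land here instead
tail -n 100 /var/log/daemon.log
```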

On Sep 5, 2012, at 11:43 AM, George Shuklin wrote:

> Okay, I don't know anything about HVM, but PV is much more interesting.
> 
> You need to check whether the VM is running or not (i.e. whether that 
> message comes from the virtual machine itself or from some component of xapi).
> 
> There is one dirty but very effective trick:
> 
> xe vm-start vm=... on=<host>; /etc/init.d/xapi stop
> 
> after that, the dying domain will stay in list_domains with a -d- status.
> 
> If it doesn't, that means the domain is dying instantly or not starting at all.
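That check might look like this in dom0 (a sketch; list_domains is the helper shipped with XCP, and the VM/host names are placeholders):

```shell
# Start the VM, then stop xapi so the dying domain is not cleaned up
xe vm-start vm=... on=<host>
/etc/init.d/xapi stop

# A domain stuck dying shows a 'd' flag in the state column
list_domains | grep -- '-d-'

# Restart xapi once done inspecting
/etc/init.d/xapi start
```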
> 
> Another trick is to try booting with an external kernel (PV-bootloader="", 
> PV-kernel=..., PV-ramdisk=..., with the kernel/ramdisk somewhere in 
> /boot/guest in dom0).
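A sketch of how those parameters might be set with xe (parameter names are from the message above; the name-label and kernel/initrd filenames are placeholders):

```shell
# Look up the VM's uuid from its name-label (placeholder)
VM_UUID=$(xe vm-list name-label=<name> --minimal)

# Disable the bootloader and point the VM at a kernel/initrd staged in dom0
xe vm-param-set uuid="$VM_UUID" PV-bootloader=""
xe vm-param-set uuid="$VM_UUID" PV-kernel=/boot/guest/vmlinuz-example
xe vm-param-set uuid="$VM_UUID" PV-ramdisk=/boot/guest/initrd-example

xe vm-start uuid="$VM_UUID"
```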
> 
> On 05.09.2012 18:13, Nathanial Byrnes wrote:
>> These are PV guests. The appropriate VBD (in some of the working cases 
>> there is more than one VBD) is set to bootable. The HVM-boot-{policy,params} 
>> are the same for working and non-working PV domUs, for what it's worth.
>> 
>>      Thanks,
>>      Nate
>> 
>> 
>> On Sep 5, 2012, at 10:00 AM, George Shuklin wrote:
>> 
>>> Are you talking about HVM or PV guests?
>>> 
>>> Not sure if this is related to that problem, but here are some vm/vbd 
>>> attributes to play with:
>>> 
>>> vbd:
>>> bootable=true/false
>>> 
>>> vm:
>>> HVM-boot-policy (distinguishes PV from HVM)
>>> HVM-boot-params
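These attributes can be inspected and changed with xe, for example (a sketch; the uuids are placeholders):

```shell
# Mark a VBD as the boot device
xe vbd-param-set uuid=<vbd-uuid> bootable=true

# An empty HVM-boot-policy means PV; "BIOS order" means HVM
xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy
xe vm-param-set uuid=<vm-uuid> HVM-boot-policy=""
```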
>>> 
>>> 
>>> On 05.09.2012 16:37, Nathanial Byrnes wrote:
>>>> Hello,
>>>>    I have recently done a number of bad things to my XCP 1.0 environment, 
>>>> and I believed most of them sorted. Then I upgraded from XCP 1.0 to 1.5 by 
>>>> way of 1.1. The bad things involved moving the shared storage backend from 
>>>> NFS to GlusterFS, monkeying with the SR and its PBDs, losing all the VM 
>>>> VBDs in the process, and having to manually find and remap the VDIs to the 
>>>> correct VMs. Once I survived all of that self-induced unpleasantness, I 
>>>> decided to upgrade to 1.5.... (obviously a genius behind this keyboard). 
>>>> After the upgrade some VMs boot and run as before, but others attempt to 
>>>> boot, the console shows the subject message, and they shut down after 30 
>>>> seconds. Please note that the functioning VMs are on the same SR/PBD as the 
>>>> non-functioning ones. Also, I can attach the non-booting VDIs to Dom0 and 
>>>> mount/fdisk them without issue. My question is: how do I further 
>>>> interrogate/investigate this boot process, in both its failing and 
>>>> succeeding cases, to identify the source of the issue?
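Attaching a non-booting VDI to Dom0 for inspection, as described above, might be done like this (a sketch; the vdi-uuid is a placeholder, and mode=ro keeps the disk read-only):

```shell
# Find dom0's own VM record, then hot-plug the VDI into it read-only
DOM0=$(xe vm-list is-control-domain=true --minimal)
VBD=$(xe vbd-create vm-uuid="$DOM0" vdi-uuid=<vdi-uuid> device=autodetect mode=ro type=Disk)
xe vbd-plug uuid="$VBD"

# ...inspect the new /dev/xvd* device with fdisk/mount, then detach:
xe vbd-unplug uuid="$VBD"
xe vbd-destroy uuid="$VBD"
```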
>>>> 
>>>>    Thanks very much in advance.
>>>> 
>>>>    Regards,
>>>>    Nate
>>>> 
>>>> 
>>>> _______________________________________________
>>>> Xen-api mailing list
>>>> Xen-api@xxxxxxxxxxxxx
>>>> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
>>> 
>>> 
> 
> 

