
Re: [Xen-users] Xen/Kernel panic trying to mount raid1 root partition


  • To: <xen-users@xxxxxxxxxxxxxxxxxxxx>
  • From: "Marc Tousignant" <myrdhn@xxxxxxxxx>
  • Date: Tue, 12 Mar 2019 11:13:55 -0400
  • Delivery-date: Tue, 12 Mar 2019 15:15:11 +0000
  • List-id: Xen user discussion <xen-users.lists.xenproject.org>
  • Thread-index: AQDkYM/REOikB4BGEg1osBZfHPtTYqfo9Dpw

Going to be answering my own issue…

 

It was a stupidly simple thing, once you identify it.

As it has been two years, I'm not sure how I got Xen and mdadm working without an initrd, but to my knowledge my kernel config did not have an initrd built into it; it was not in my running config, so I believe that's the case.

I looked further into the boot without the initrd, the one that was loading the ucode. I tried booting it without Xen and it still failed, which proved to me that I needed the initrd; assembling the mdadm array at boot seems to have moved there in the past two years.
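For anyone else who hits this: as far as I can tell, the kernel can only auto-assemble arrays with the old 0.90 metadata on partitions of type 0xfd, while arrays created by a recent mdadm default to 1.2 metadata, which has to be assembled from userspace, i.e. from an initrd. You can check what you have with something like:

  # 0.90 metadata can be autodetected by the kernel;
  # 1.x metadata must be assembled by an initrd
  mdadm --detail /dev/md1 | grep -i version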

 

So, since I had a working kernel without calling Xen, I started with that. I then started looking into why the kernel was not loading the initrd, which seemed to be why it could not mount the RAID. What I found was that it seemed to load the ucode, but never the initrd; hence the failure.
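The reason, I believe, is that Xen hands the second module line to dom0 as its initrd, so with /early_ucode.cpio listed second the kernel was getting the microcode cpio as its entire initramfs and never saw the real one. One workaround I considered (untested by me; the paths are just mine) is to prepend the ucode cpio to the initramfs and load the result as a single module:

  # the early-microcode format is just an uncompressed cpio
  # glued in front of the normal initramfs
  cat /boot/early_ucode.cpio /boot/initramfs-genkernel-x86_64-4.19.1-aufs \
      > /boot/initramfs-combined.img

with a single "module /initramfs-combined.img" line in place of the two.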

I started looking into a way to put the ucode into the initrd, so I installed dracut and manually built the initrd, but that did not work right off the bat, so I continued looking.
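For reference, dracut can do that bundling itself; a minimal sketch of the sort of invocation I mean (the image path and kernel version here are illustrative, not exactly what I ran):

  # build an initramfs with early microcode prepended
  dracut --early-microcode --force /boot/initramfs-4.19.1-aufs.img 4.19.1-aufs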

I found that Xen added microcode support via ucode=, so I added that and switched back to my original initrd, but it had no effect. I was hoping it would load the microcode from the module line, but it seems it does not support that.
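For the record, as I understand it the option is documented as ucode=<integer>|scan, where the integer picks a module by index and scan tells Xen to look through the modules for a microcode cpio. Something like this is what I take the intended usage to be (initramfs-with-ucode.img being a combined image as above):

  multiboot /xen.gz loglvl=all guest_loglvl=all xsave=1 ucode=scan
  module /kernel-4.19.1-aufs root=/dev/md1 rootfstype=ext4
  module /initramfs-with-ucode.img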

I then used the dracut initrd, but that also failed for some reason. I had added ucode=auto to Xen's command line and removed the ucode module line. Maybe I didn't create it right, but it was also a lot larger than my original initrd, and not just by the size of the ucode.

 

I then decided to try building the ucode into the kernel itself. SUCCESS!!!
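For anyone wanting to do the same, the relevant kernel options are along these lines; the exact firmware file depends on your CPU's family-model-stepping (and you'd use CONFIG_MICROCODE_AMD and an amd-ucode/ blob on AMD), so the name below is only a placeholder:

  CONFIG_MICROCODE=y
  CONFIG_MICROCODE_INTEL=y
  # build the blob into the kernel image itself
  CONFIG_EXTRA_FIRMWARE="intel-ucode/06-3c-03"
  CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"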

 

Now to finish rebuilding the rest of the system, hoping I don't have too much work to recreate/modify my config files.

 

MarcT

 

From: Xen-users <xen-users-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Marc Tousignant
Sent: Tuesday, March 12, 2019 3:58 AM
To: xen-users@xxxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Xen/Kernel panic trying to mount raid1 root partition

 

I have a system that was running Xen for two years without getting updated. It had no issues, but I decided the machine needed to be upgraded, so I threw in some new drives and got to work. Unfortunately, the Xen boot is not playing nice and panics, unable to mount the md1 partition (mdadm raid1).

 

I'd have to throw the old drives back in to check which Xen version it was running, as the grub config doesn't tell me, but here is the old grub config that worked.

menuentry "kernel-4.9.0-aufs" {

  insmod raid

  insmod mdraid

  insmod part_msdos

  insmod part_msdos

  insmod ext2

  set root=(md0)

  search --no-floppy --fs-uuid --set efb580c9-988d-48cf-8e2e-ba6fbea170e4

  multiboot /xen.gz loglvl=all guest_loglvl=all xsave=1

  module /kernel-4.9.0-aufs rootfstype=ext4 root=/dev/md1 iommu=1 xen-pciback.permissive xen-pciback.passthrough=1 xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:04:00.2)(0000:04:00.3)

  set gfxpayload=keep

}

The PCI 0000:04 devices are NICs, so ignore those. But this worked for 2+ years without issue.
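One difference worth noting: GRUB renamed its raid modules at some point, which is why the newer entries below load different ones. Roughly:

  insmod diskfilter   # replaces the old "insmod raid"
  insmod mdraid09     # 0.90-metadata arrays, replaces "insmod mdraid"
  insmod mdraid1x     # needed instead if the array uses 1.x metadata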

 

I used the running config from the old kernel as the base for the new one, so all my drivers and Xen setup are exactly the same. The new one starts Xen and hands over to the kernel, but then it says: VFS: Cannot open root device "md1" or unknown-block(0,0). (More on that error after the config below.)

Here is that config.

menuentry "kernel-4.19.1-aufs" {

  insmod diskfilter

  insmod mdraid09

  insmod part_msdos

  insmod part_msdos

  insmod ext2

  set root=(md0)

  search --no-floppy --fs-uuid --set 954936b8-9e17-4a2a-b2c2-b15e7ced5ee8

  multiboot /xen.gz loglvl=all guest_loglvl=all xsave=1 iommu=1 iommu_inclusive_mapping=1 dom0_max_vcpus=2 dom0_vcpus_pin dom0_mem=4096M

  module /kernel-4.19.1-aufs xen-pciback.permissive xen-pciback.passthrough=1 xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:04:00.2)(0000:04:00.3) root=/dev/md1 rootfstype=ext4 rand_id=P0UJKUSZ

  module /early_ucode.cpio

  set gfxpayload=keep

}
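As I understand it, unknown-block(0,0) means the kernel never found any block device for root= at all. Without an initrd, root=/dev/md1 can only work if the kernel assembles the array itself, which needs 0.90 metadata, raid members marked with MBR partition type fd, and autodetect compiled in:

  # raid members should show type "fd  Linux raid autodetect"
  fdisk -l /dev/sda
  # and the kernel needs CONFIG_MD_AUTODETECT=y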

 

I'm on Funtoo/Gentoo, so I also built the kernel using genkernel again, and it included the initramfs (the genkernel invocation is sketched just after this config). Here is the config for a working boot that does not call Xen.

menuentry "Funtoo Linux genkernel - kernel-genkernel-x86_64-4.19.1-aufs" {

  insmod diskfilter

  insmod mdraid09

  insmod part_msdos

  insmod part_msdos

  insmod ext2

  set root=(md0)

  search --no-floppy --fs-uuid --set 954936b8-9e17-4a2a-b2c2-b15e7ced5ee8

  linux /kernel-genkernel-x86_64-4.19.1-aufs domdadm real_root=/dev/md1 rootfstype=ext4 rand_id=H1BMLGDA

  initrd /early_ucode.cpio /initramfs-genkernel-x86_64-4.19.1-aufs

  set gfxpayload=keep

}
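The genkernel run for that entry was along these lines (flags from memory; genkernel's defaults handle the rest):

  # build kernel + initramfs with mdadm support included
  genkernel --mdadm --install all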

 

Yet this similar Xen config fails to boot as well, but reports (null) for the root fs.

menuentry "Funtoo on Xen - kernel-genkernel-x86_64-4.19.1-aufs" {

  insmod diskfilter

  insmod mdraid09

  insmod part_msdos

  insmod part_msdos

  insmod ext2

  set root=(md0)

  search --no-floppy --fs-uuid --set 954936b8-9e17-4a2a-b2c2-b15e7ced5ee8

  multiboot /xen.gz loglvl=all guest_loglvl=all xsave=1 iommu=1 iommu_inclusive_mapping=1 dom0_max_vcpus=2 dom0_vcpus_pin dom0_mem=4096M

  module /kernel-genkernel-x86_64-4.19.1-aufs domdadm xen-pciback.permissive xen-pciback.passthrough=1 xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:04:00.2)(0000:04:00.3) real_root=/dev/md1 rootfstype=ext4 rand_id=H1BMLGDA

  module /early_ucode.cpio

  module /initramfs-genkernel-x86_64-4.19.1-aufs

  set gfxpayload=keep

}
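One thing I have not tried yet: swapping the module order so the initramfs is the second module, in case Xen cares which module it hands to the kernel as its initrd, i.e.:

  module /kernel-genkernel-x86_64-4.19.1-aufs domdadm xen-pciback.permissive xen-pciback.passthrough=1 xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:04:00.2)(0000:04:00.3) real_root=/dev/md1 rootfstype=ext4 rand_id=H1BMLGDA
  module /initramfs-genkernel-x86_64-4.19.1-aufs
  module /early_ucode.cpio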

 

There has to be something stupidly simple here that I am missing.

 

MarcT

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

