RE: [Xen-users] Problems with FC4 and NFS Root
This seems not to be the problem. To fix this, I could simply build an initrd image with the NFS modules; a rough sketch of what I would try is included below, along with a basic check of the NFS export from dom0. Another user replied to me saying that he is using the same kernel-xenU (FC4) and is seeing the same problem (hanging after the IPsec message), but he is loading the root file system from a physical device. Does anyone have an idea of what might be going on? I'm guessing the problem is related to this kernel and IPsec. Does anyone have tips on how I can debug this problem?

Thanks
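A rough, untested sketch of the initrd approach (it assumes the kernel package dropped its config file under /boot, that FC4's mkinitrd accepts --with=<module>, and that nfs/lockd/sunrpc are the right module names):

First check that NFS really is modular in this kernel (I would expect CONFIG_NFS_FS=m):

# grep -E 'CONFIG_NFS_FS|CONFIG_ROOT_NFS|CONFIG_IP_PNP' /boot/config-2.6.11-1.1369_FC4xenU

Then build an initrd that carries the NFS client modules:

# mkinitrd --with=sunrpc --with=lockd --with=nfs /boot/initrd-2.6.11-1.1369_FC4xenU.img 2.6.11-1.1369_FC4xenU

and point the domain at it by adding this line to vm.cfg:

ramdisk = "/boot/initrd-2.6.11-1.1369_FC4xenU.img"

I am not sure the stock FC4 initrd can mount an NFS root on its own, so this is only the direction I would explore, not a confirmed fix.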
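Separately, a quick sanity check of the NFS export that I plan to run from dom0 (assuming the NFS client utilities are installed there; the address and path are the ones from my setup quoted below):

# exportfs -v
# showmount -e 10.0.0.1
# mount -t nfs -o nolock 10.0.0.1:/vm /mnt
# ls /mnt/etc
# umount /mnt

If that test mount works, it should at least rule out the export itself and point back at the xenU kernel.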
-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On behalf of Minoru Kato
Sent: Monday, August 8, 2005 23:03
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Problems with FC4 and NFS Root

Hi,

vmlinuz-2.6.11-1.1369_FC4xenU has its NFS support built as modules (it is not compiled in statically). Therefore, it cannot boot from an NFS root.

--- kato

> Hello everyone,
>
> I've been working with Xen and Debian for a while, but recently decided to give it a try with Fedora Core 4 and an NFS-root-based filesystem.
>
> I've installed the FC4 base system and the following packages with their dependencies:
> - xen
> - kernel-xen0
> - kernel-xenU
>
> After setting my new default option in /etc/grub/menu.lst and starting 'xend', I created a directory named "/vm" and copied the FC4 base installation into it.
>
> Then I changed /vm/etc/fstab to something like this:
> 10.0.0.1:/vm / nfs defaults 0 0
> /dev/devpts /dev/pts devpts gid=5,mode=620 0 0
> /dev/shm /dev/shm tmpfs defaults 0 0
> /dev/proc /proc proc defaults 0 0
> /dev/sys /sys sysfs defaults 0 0
> LABEL=SWAP-hda2 swap swap defaults 0 0
>
> And created a /root/vm.swp swap file with something like this:
> # dd if=/dev/zero of=/root/vm.swp bs=1024 count=262144
> # mkswap /root/vm.swp
>
> My configuration file looks like this:
> kernel = "/boot/vmlinuz-2.6.11-1.1369_FC4xenU"
> memory = 64
> name = "vm"
> vif = [ 'mac=aa:00:00:00:00:f4, bridge=xen-br0' ]
> disk = [ 'file:/root/vm.swp,hda2,w' ]
> dhcp="dhcp"
> root = "/dev/nfs"
> nfs_server = '10.0.0.1'
> nfs_root = '/vm'
> extra = "4"
> restart = 'never'
>
> I've also set /etc/exports to export /vm with the following options:
> /vm 10.0.0.0/24(rw,sync,no_root_squash)
>
> And I've started the NFS daemon with "service nfs start".
>
> However, when booting the guest with "xm create -c vm.cfg", the boot hangs after the IPsec netlink socket initialization.
>
> Also note that when I first installed FC4, I disabled SELinux:
> # grep -v "^#" /etc/selinux/config
> SELINUX=disabled
> SELINUXTYPE=targeted
> #
>
> Any ideas on what might be happening?
>
> Here is the boot sequence before the hang:
> # xm create -c vm.cfg
> Using config file "vm.cfg".
> Started domain vm, console on port 9604
> ************ REMOTE CONSOLE: CTRL-] TO QUIT ********
> Linux version 2.6.11-1.1369_FC4xenU (bhcompile@xxxxxxxxxxxxxxxxxxxxxxxxxx) (gcc version 4.0.0 20050525 (Red Hat 4.0.0-9)) #1 SMP Thu Jun 2 23:33:51 EDT 2005
> BIOS-provided physical RAM map:
> Xen: 0000000000000000 - 0000000004000000 (usable)
> 64MB LOWMEM available.
> Using x86 segment limits to approximate NX protection
> DMI not present.
> IRQ lockup detection disabled
> Allocating PCI resources starting at 04000000 (gap: 04000000:fc000000)
> Built 1 zonelists
> Kernel command line: ip=:10.0.0.1::::eth0:dhcp root=/dev/nfs nfsroot=10.0.0.1:/vm1 4
> Initializing CPU#0
> PID hash table entries: 1024 (order: 10, 16384 bytes)
> Xen reported: 1595.334 MHz processor.
> Using tsc for high-res timesource
> Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
> Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
> Memory: 60800k/65536k available (1785k kernel code, 4648k reserved, 506k data, 156k init, 0k highmem)
> Checking if this processor honours the WP bit even in supervisor mode... Ok.
> Security Framework v1.0.0 initialized
> SELinux: Initializing.
> SELinux: Starting in permissive mode
> selinux_register_security: Registering secondary module capability
> Capability LSM initialized as secondary
> Mount-cache hash table entries: 512
> CPU: Trace cache: 12K uops, L1 D cache: 8K
> CPU: L2 cache: 256K
> Enabling fast FPU save and restore... done.
> Enabling unmasked SIMD FPU exception support... done.
> Checking 'hlt' instruction... disabled
> CPU0: Intel(R) Pentium(R) 4 CPU 1.60GHz stepping 02
> per-CPU timeslice cutoff: 731.13 usecs.
> task migration cache decay timeout: 1 msecs.
> SMP motherboard not detected.
> smpboot_clear_io_apic_irqs
> Brought up 1 CPUs
> softlockup thread 0 started up.
> NET: Registered protocol family 16
> xen_mem: Initialising balloon driver.
> Grant table initialized
> audit: initializing netlink socket (disabled)
> audit(1123554105.008:1): initialized
> Total HugeTLB memory allocated, 0
> VFS: Disk quotas dquot_6.5.1
> Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
> SELinux: Registering netfilter hooks
> Initializing Cryptographic API
> ksign: Installing public key data
> Loading keyring
> - Added public key 42BD35A990375F72
> - User ID: Red Hat, Inc. (Kernel Module GPG key)
> io scheduler noop registered
> io scheduler anticipatory registered
> io scheduler deadline registered
> io scheduler cfq registered
> RAMDISK driver initialized: 16 RAM disks of 16384K size 1024 blocksize
> Xen virtual console successfully installed as tty1
> Event-channel device installed.
> Blkif frontend is using grant tables.
> xen_blk: Initialising virtual block device driver
> xen_net: Initialising virtual ethernet driver.
> md: md driver 0.90.1 MAX_MD_DEVS=256, MD_SB_DISKS=27
> NET: Registered protocol family 2
> IP: routing cache hash table of 256 buckets, 4Kbytes
> TCP established hash table entries: 4096 (order: 4, 65536 bytes)
> TCP bind hash table entries: 4096 (order: 3, 49152 bytes)
> TCP: Hash tables configured (established 4096 bind 4096)
> Initializing IPsec netlink socket
> <...and it simply stays here...>
>
> Best Regards,
> Felipe

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users