[Xen-bugs] [Bug 1773] new 3.0.0 kernel cannot connect to network, Device 0 (vif)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1773

------- Comment #18 from johneed@xxxxxxxxxxx 2011-07-29 13:02 -------

Sure; I simply looked through xend-config.sxp and found the kernel argument setting I had hoped for.

    Domain-0    0    7473    4   r-----   129.7

This is what I concluded was problematic: I have 8 GB of RAM, but dom0 had claimed 7473 MiB of it. The other kernel I am now booting didn't have this setting. Any attempt to invoke xm mem-set under that kernel resulted in the system being overtaken by renegade Xen processes (well, processes anyway), forcing a hard reset. Having no control in bash or anything else, seeking out and killing the processes was ruled out.

So I edited xend-config.sxp in nano, drawing on this comment in the file:

    # Additionally you should use dom0_mem=<Value> as a parameter in the
    # xen kernel to reserve the memory for 32-bit paravirtual domains,
    # default is "0" (0GB).
    (total_available_memory 8GB)

So I actually changed two things: I changed (total_available_memory 0) to (total_available_memory 8GB), and I added dom0_mem=1.5GB. Initially I put it on the /boot/xen-version.gz line, which was unhelpful, so I added it to

    module /boot/kernel-3.0.0-gentoo-amd64 root=/dev/sda3 ro console=tty0

which was where it belongs. Memory settings are not my forte, but I have had to adjust them in the past. I guessed this (total_available_memory 0) was unhelpful.

> > new flaw.
> Then open a new bug. Don't pollute this bug.

I need not have entered that; I should have left it out altogether. It was spurious. Thanks heaps for the prompt assistance. Is that clear enough?

--
Configure bugmail: http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

_______________________________________________
Xen-bugs mailing list
Xen-bugs@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-bugs
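[Editorial note: for readers hitting the same issue, dom0_mem is documented as a Xen hypervisor command-line option, so most setups pass it on the kernel /boot/xen.gz line of the GRUB entry rather than on the Linux module line. A sketch of a GRUB legacy entry, assuming hypothetical paths and the 1.5 GB figure from this report:]

```
# Hypothetical GRUB legacy (menu.lst) entry -- title, device, and file
# names are illustrative, not taken from the reporter's system.
title Xen / Gentoo 3.0.0
root (hd0,0)
# dom0_mem is a Xen hypervisor argument, so it goes on the xen.gz line:
kernel /boot/xen.gz dom0_mem=1536M
module /boot/kernel-3.0.0-gentoo-amd64 root=/dev/sda3 ro console=tty0

# After rebooting, the result can be checked with the xm toolstack:
#   xm list                  (Mem column for Domain-0 should be ~1536)
#   xm mem-set Domain-0 1536 (balloon dom0 down at runtime instead)
```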