Re: [Xen-users] domU kernel from source capable of live migration
On Mon, 2011-02-21 at 12:50 -0500, William L. Thomson Jr. wrote:
>
> Now somewhat unrelated, I have a major problem with 2.6.37, where
> there are issues with the PMU, which prevents networking from working
> and the system from mounting its NFS root. This is not related to
> live migration; it is just preventing me from trying newer sources
> that might be capable of live migration. Likely an upstream issue,
> not Xen-related, not sure.
>
> On 2.6.36 I get the following for the PMU:
>
> AMD PMU driver.
> ... version:              0
> ... bit width:            48
> ... generic registers:    4
> ... value mask:           0000ffffffffffff
> ... max period:           00007fffffffffff
> ... fixed-purpose events: 0
> ... event mask:           000000000000000f
>
> I use the same config from 2.6.36 in 2.6.37, and now get:
>
> Broken PMU hardware detected, software events only.

Not sure if this is Xen-related or not, but I am getting the same with
2.6.38-rc6, so I am thinking I might need to take this up with
kernel.org. I came across a similar report on Red Hat's bugzilla [1].
It does not look to involve Xen (not sure about other virtualization),
and networking might have been working for them.

I am fairly sure the missing PMU, or it being detected as broken, is
what prevents my domU from getting networking up and running. It might
be something else, but that is the only difference I can see in dmesg
between a working 2.6.36 kernel and the non-working 2.6.37 and
2.6.38-rc6 kernels. With the same config, on any kernel past 2.6.36 I
get no network, mounting the NFS root fails, and no traffic from that
domU is seen on the network during the attempt to mount the NFS root.

1. https://bugzilla.redhat.com/show_bug.cgi?id=676527
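In case it is useful to anyone hitting the same thing, this is roughly
how I am confirming that nothing leaves the guest. The vif name, NFS
server address, and export path below are just examples from my setup,
so adjust them to yours:

    # On dom0, watch the domU's backend interface while the guest boots.
    # vif1.0 is the first interface of domain id 1; yours may differ.
    tcpdump -n -i vif1.0

    # domU kernel command line used to mount the NFS root
    # (illustrative server and path):
    root=/dev/nfs nfsroot=192.168.1.10:/exports/domu ip=dhcp

Booting 2.6.36 this way, the DHCP and NFS traffic shows up there right
away; with 2.6.37 and 2.6.38-rc6 nothing appears at all.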
--
William L. Thomson Jr.
Obsidian-Studios, Inc.
http://www.obsidian-studios.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users