Re: [Xen-devel] Hypervisor crash(!) on xl cpupool-numa-split
On Mon, Jan 31, 2011 at 2:59 PM, Andre Przywara <andre.przywara@xxxxxxx> wrote:
> Right, that was also my impression.
>
> I seem to have gotten a bit further, though:
> By accident I found that in c/s 22846 the issue is fixed; it works now
> without crashing. I bisected it down to my own patch, which disables the
> NODEID_MSR in Dom0. I could confirm this theory by a) applying this single
> line (clear_bit(NODEID_MSR)) to 22799 and _not_ seeing it crash, and b) by
> removing this line from 22846 and seeing it crash.
>
> So my theory is that Dom0 sees different nodes on its virtual CPUs via the
> physical NodeID MSR, but this association can (and will) be changed at any
> moment by the Xen scheduler. Dom0 therefore builds a bogus topology based
> on these values. As soon as all vCPUs of Dom0 are confined to one node
> (node 0, which is what the cpupool-numa-split call causes), the Xen
> scheduler somehow hiccups.
> So it seems to be a bad combination of the NodeID MSR (present on newer
> AMD platforms: sockets C32 and G34) and a NodeID-MSR-aware Dom0
> (2.6.32.27). Since this is a hypervisor crash, I assume the bug is still
> there; the current tip just makes it much less likely to be triggered.
>
> Hope that helps; I will dig deeper now.

Thanks.  The crashes you're getting are in fact very strange.  They have to
do with assumptions that the credit scheduler makes as part of its accounting
process.  It would only make sense for those to be triggered if a vcpu was
moved from one pool to another without the proper accounting being done.
(Specifically, each vcpu is classified as either "active" or "inactive", and
each scheduler instance keeps track of the total weight of all "active"
vcpus.  The BUGs you're tripping over say that this invariant has been
violated.)

However, I've looked at the cpupools vcpu-migrate code, and it looks like it
does everything right.  So I'm a bit mystified.  My only thought is that a
cpumask somewhere may not be getting set properly, such that a vcpu was being
run on a cpu from another pool.

Unfortunately I can't take a good look at this right now; hopefully I'll be
able to take a look next week.  Andre, if you were keen, you might go through
the credit code and put in a bunch of ASSERTs that the current pcpu is in the
mask of the current vcpu, and that the current vcpu is assigned to the pool
of the current pcpu, and so on.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
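For illustration, below is a minimal sketch of the kind of ASSERTs George
suggests, written against the rough shape of Xen's scheduler and cpupool
structures of that era.  The field and helper names used here (cpu_affinity,
the per-cpu cpupool pointer, cpu_valid, check_vcpu_placement) are assumptions
for the sketch and differ in detail from the actual code in
xen/common/sched_credit.c and xen/common/cpupool.c.

/*
 * Sketch only: placement sanity checks of the kind suggested above,
 * to be dropped into the credit scheduler's hot paths (e.g. the tick
 * and schedule functions).  Structure and field names approximate the
 * Xen-internal ones and are not taken verbatim from the tree.
 */
#include <xen/sched.h>      /* struct vcpu, struct domain (assumed)   */
#include <xen/cpumask.h>    /* cpumask helpers (assumed)              */

static void check_vcpu_placement(const struct vcpu *v, unsigned int cpu)
{
    /* The pcpu we are running on must be allowed by the vcpu's affinity. */
    ASSERT(cpumask_test_cpu(cpu, &v->cpu_affinity));

    /* The vcpu's domain must belong to the pool that owns this pcpu... */
    ASSERT(v->domain->cpupool == per_cpu(cpupool, cpu));

    /* ...and this pcpu must be in that pool's valid-cpu mask. */
    ASSERT(cpumask_test_cpu(cpu, &v->domain->cpupool->cpu_valid));
}

With checks like these in place, repeating the xl cpupool-numa-split sequence
that triggers the crash should show which invariant is violated first, i.e.
whether a vcpu really ends up running on a pcpu outside its pool.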