Re: [Xen-devel] One potential issue of shadow fault emulation
Hi,

On Fri, Dec 21, 2007 at 10:58:49PM +0800, Jiang, Yunhong wrote:
> Currently the shadow fault handler tries to emulate up to four extra
> instructions for PAE guests, to reduce the number of vmexits.
>
> But there is a potential issue here: consider the case where the second
> instruction is a write to the virtual TPR register. On real hardware,
> if TPR acceleration is enabled, the CPU will access the
> VIRTUAL_APIC_PAGE_ADDR set in the VMCS. However, when we emulate, we
> don't handle this situation and instead access the APIC_ACCESS_ADDR
> page pointed to by the shadow. This is sure to cause problems for the
> guest, usually a blue screen, and the issue shows up randomly depending
> on the content of the APIC access page.
>
> So how should we cope with this situation? Stop emulation, or continue
> emulating but access the virtual APIC page? Or any better idea?
>
> Thanks
> -- Yunhong Jiang

I don't know if I'm hitting the same problem, but I also have a pretty
serious issue with changeset 15199. In our case, RHEL3U8 and RHEL4U4
guests with at least 2 vcpus consistently run into errors on boot when
fscking, like so:

=====
EXT3-fs error (device ide0(3,3)): ext3_free_blocks: bit already cleared for block 66191
EXT3-fs error (device ide0(3,3)): ext3_free_inode: bit already cleared for inode 29267
EXT3-fs error (device ide0(3,3)): ext3_free_blocks: bit already cleared for block 66147
EXT3-fs error (device ide0(3,1)): ext3_free_inode: bit already cleared for inode 32388
=====

Dropping back to 1 vcpu or a uniprocessor kernel alleviates the problem.
Bisecting, we found that this was caused by cs 15199, and it was serious
enough for us with 32-bit PAE HVM guests that we ended up backing that
changeset out for our release.

I was in the middle of trying the same test with 3.1.3-rc1-pre when I
saw this thread :)  3.2 looks like it behaves differently and ends up
spinning at 100% cpu when fscking.

The performance impact of dropping this patch is severe (page faults
about 7x more expensive than on the same 64-bit kernel), so I'd like to
help where I can on this.

Thanks
kurt
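For what it's worth, a minimal, self-contained sketch of the "stop emulation" option mentioned above -- purely illustrative, not actual Xen code. The type and names here (vcpu_tpr_state, apic_access_gfn, tpr_accel_enabled, must_stop_emulation) are hypothetical stand-ins for the corresponding VMCS/shadow state. The idea is just that the extra-instruction emulation loop would bail out as soon as an emulated access targets the APIC access page while TPR acceleration is active, so the hardware path through VIRTUAL_APIC_PAGE_ADDR stays authoritative instead of the emulator poking the APIC access page directly.

```c
/* Illustrative sketch only -- NOT Xen code.  All names below are
 * hypothetical stand-ins for the real VMCS/shadow state. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vcpu_tpr_state {
    bool     tpr_accel_enabled;  /* TPR acceleration active in the VMCS */
    uint64_t apic_access_gfn;    /* guest frame behind APIC_ACCESS_ADDR */
};

/*
 * Return true if the extra-instruction emulation loop should stop
 * before emulating an access to guest frame 'gfn': with TPR
 * acceleration on, hardware would have serviced this access via
 * VIRTUAL_APIC_PAGE_ADDR, so letting the emulator write the APIC
 * access page directly would corrupt guest-visible state.
 */
static bool must_stop_emulation(const struct vcpu_tpr_state *v,
                                uint64_t gfn)
{
    return v->tpr_accel_enabled && gfn == v->apic_access_gfn;
}

int main(void)
{
    struct vcpu_tpr_state v = {
        .tpr_accel_enabled = true,
        .apic_access_gfn   = 0xfee00,   /* typical APIC base >> 12 */
    };

    /* An emulated instruction touching the APIC access page: stop. */
    printf("stop emulation: %d\n", must_stop_emulation(&v, 0xfee00));
    /* An ordinary page-table write: safe to keep emulating. */
    printf("stop emulation: %d\n", must_stop_emulation(&v, 0x12345));
    return 0;
}
```

The alternative (continue emulating but redirect the access to the virtual APIC page) would need the emulator's memory callbacks to remap the target page rather than just abort, which is more invasive.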