
Re: [Xen-devel] Re: VM hung after running sometime



 On 09/25/2010 03:40 AM, wei song wrote:
>
> Hi Jeremy,
>
> Do you think this issue is caused by the kernel being built without CONFIG_X86_F00F_BUG?

F00F_BUG? Why would that be related? F00F itself should be irrelevant in
any Xen situation, since the bug only affects P5 processors, which I
assume you're not using (and which I don't think are supported under Xen).

J
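
(As a quick sanity check of the point above: the CPU family can be read from
/proc/cpuinfo, and the F00F erratum only affects family 5 (P5 / original
Pentium) parts. A minimal sketch, not part of the original mail; the script
name is just illustrative:)

    # f00f_check.py -- illustrative sketch: list CPU families from /proc/cpuinfo
    # and flag whether any family-5 (F00F-affected) processor is present.
    with open("/proc/cpuinfo") as f:
        families = {line.split(":", 1)[1].strip()
                    for line in f
                    if line.lower().startswith("cpu family")}
    print("CPU families:", sorted(families))
    print("Family 5 (potentially F00F-affected) present:", "5" in families)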

>
> thanks,
> James
>
> On 25 September 2010 at 17:33, MaoXiaoyun <tinnycloud@xxxxxxxxxxx> wrote:
>
>     Hi Jeremy:
>
>     The test with irqbalance disabled is running. Currently one server
>     has crashed on the NIC.
>     Trace.jpg in the attachments is the screenshot from the serial port, and
>     trace.txt is from /var/log/messages.
>     Do you think this is connected with irqbalance being disabled, or is
>     there some other possibility?
>
>     In addition, I find in /proc/interrupts that all interrupts are
>     handled on cpu0 (please refer to the attached interrupts.txt).
>     Could this be a possible cause of the server crash, and is there a way
>     I can manually configure the system to distribute those interrupts
>     evenly?
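
(For reference: interrupts can be steered by hand by writing a CPU mask to
/proc/irq/<N>/smp_affinity, which is the usual manual alternative to
irqbalance. Below is a minimal sketch, not from the original thread, that
round-robins IRQs across the first few CPUs; the CPU count is a placeholder,
and some IRQs, in particular Xen-bound event channels, may refuse a new
affinity.)

    # spread_irqs.py -- round-robin IRQ affinity over the first NCPUS CPUs by
    # writing a one-CPU hex mask to /proc/irq/<N>/smp_affinity. Run as root.
    # Sketch only: which IRQs can actually be moved depends on the hardware
    # and, under Xen, on how the pirqs are bound.
    import os

    NCPUS = 4          # assumption: adjust to the number of dom0 vcpus
    irq_root = "/proc/irq"

    irqs = sorted((d for d in os.listdir(irq_root) if d.isdigit()), key=int)
    for i, irq in enumerate(irqs):
        mask = 1 << (i % NCPUS)                 # one CPU per IRQ, round robin
        path = os.path.join(irq_root, irq, "smp_affinity")
        try:
            with open(path, "w") as f:
                f.write("%x\n" % mask)
        except (IOError, OSError):
            pass                                # some IRQs can't be moved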
>
>     Meanwhile, I will start the new test with the patched kernel soon. Thanks.
>
>     > Date: Thu, 23 Sep 2010 16:20:09 -0700
>     > From: jeremy@xxxxxxxx
>     > To: tinnycloud@xxxxxxxxxxx
>     > CC: xen-devel@xxxxxxxxxxxxxxxxxxx; keir.fraser@xxxxxxxxxxxxx
>     > Subject: Re: [Xen-devel] Re: VM hung after running sometime
>     >
>     > On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
>     > > The interrupts file is attached. The server has 24 HVM domains
>     > > running for about 40 hours.
>     > >
>     > > Well, we may upgrade to the new kernel in the future, but currently
>     > > we prefer a fix with the least impact on our present servers.
>     > > So it would be really nice if you could offer that set of patches;
>     > > it would also be our first choice.
>     >
>     > Try cherry-picking:
>     > 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for IPIs and VIRQs
>     > 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as edge-triggered
>     > 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq for pirq events
>     > f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for pirq events
>     > 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
>     > 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use fasteoi
>     > d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename enable/disable_dynirq -> unmask/mask_irq
>     > c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename retrigger_dynirq -> irq
>     > f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq enable/disable unmask/mask
>     > 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
>     > cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq hypercall does not unmask
>     > 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
>     > 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
>     > d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
>     > f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
>     >
>
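
(For reference, a minimal sketch of applying the commits listed above, in
order, with git cherry-pick. It assumes the current directory is a clone of a
kernel tree in which those commits are reachable; the script name is just
illustrative and the hashes are the ones quoted above:)

    # apply_patches.py -- cherry-pick the commits quoted above, in order,
    # stopping on the first conflict. Sketch only.
    import subprocess
    import sys

    commits = [
        "8401e9b96f80f9c0128e7c8fc5a01abfabbfa021",
        "66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d",
        "29a2e2a7bd19233c62461b104c69233f15ce99ec",
        "f61692642a2a2b83a52dd7e64619ba3bb29998af",
        "0672fb44a111dfb6386022071725c5b15c9de584",
        "2789ef00cbe2cdb38deb30ee4085b88befadb1b0",
        "d0936845a856816af2af48ddf019366be68e96ba",
        "c6a16a778f86699b339585ba5b9197035d77c40f",
        "f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd",
        "43d8a5030a502074f3c4aafed4d6095ebd76067c",
        "cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd",
        "2390c371ecd32d9f06e22871636185382bf70ab7",
        "158d6550716687486000a828c601706b55322ad0",
        "d2ea486300ca6e207ba178a425fbd023b8621bb1",
        "f0d4a0552f03b52027fb2c7958a1cbbe210cf418",
    ]

    for c in commits:
        # stop as soon as a cherry-pick fails so conflicts can be resolved by hand
        if subprocess.call(["git", "cherry-pick", c]) != 0:
            sys.exit("cherry-pick stopped at %s; resolve and continue manually" % c)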


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

