
Re: [Xen-devel] FW: Cirrus VGA slow screen update, show blank screen last 13s or so for windows XP guest



> -----Original Message-----
> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of George
> Dunlap
> Sent: Monday, August 05, 2013 10:28 PM
> To: Gonglei (Arei)
> Cc: xen-devel@xxxxxxxxxxxxx; qemu-devel@xxxxxxxxxx; Anthony PERARD;
> Stefano Stabellini
> Subject: Re: [Xen-devel] FW: Cirrus VGA slow screen update, show blank screen
> last 13s or so for windows XP guest
> 
> On Mon, Aug 5, 2013 at 2:10 PM, Gonglei (Arei) <arei.gonglei@xxxxxxxxxx>
> wrote:
> > Hi,
> >> -----Original Message-----
> >> From: Gonglei (Arei)
> >> Sent: Tuesday, July 30, 2013 10:01 AM
> >> To: 'Pasi Kärkkäinen'
> >> Cc: Gerd Hoffmann; Andreas Färber; Hanweidong; Luonengjun;
> >> qemu-devel@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx; Anthony Liguori;
> >> Huangweidong (Hardware); 'Ian.Jackson@xxxxxxxxxxxxx'; Anthony Liguori;
> >> 'aliguori@xxxxxxxxxx'
> >> Subject: RE: [Xen-devel] [Qemu-devel] Cirrus VGA slow screen update, show
> >> blank screen last 13s or so for windows XP guest
> >>
> >> > On Mon, Jul 29, 2013 at 08:48:54AM +0000, Gonglei (Arei) wrote:
> >> > > > -----Original Message-----
> >> > > > From: Pasi Kärkkäinen [mailto:pasik@xxxxxx]
> >> > > > Sent: Saturday, July 27, 2013 7:51 PM
> >> > > > To: Gerd Hoffmann
> >> > > > Cc: Andreas Färber; Hanweidong; Luonengjun; qemu-devel@xxxxxxxxxx;
> >> > > > xen-devel@xxxxxxxxxxxxx; Gonglei (Arei); Anthony Liguori; Huangweidong
> >> > > > (Hardware)
> >> > > > Subject: Re: [Xen-devel] [Qemu-devel] Cirrus VGA slow screen update, show
> >> > > > blank screen last 13s or so for windows XP guest
> >> > > >
> >> > > > On Fri, Jul 26, 2013 at 12:19:16PM +0200, Gerd Hoffmann wrote:
> >> > > > >
> >> > > > > Maybe the xen guys did some optimizations in qemu-dm which were not
> >> > > > > merged upstream.  Try asking @ xen-devel.
> >> > > > >
> >> > > >
> >> > > > Yeah, xen qemu-dm must have some optimization for cirrus that isn't in
> >> > > > upstream qemu.
> >> > >
> >> > > Hi, Pasi. Would you give me some more details? Thanks!
> >> > >
> >> >
> >> > Unfortunately I don't have any more details.. I was just assuming that if
> >> > xen qemu-dm is fast but upstream qemu is slow, there might be some
> >> > optimization in xen qemu-dm..
> >> >
> >> > Did you check the xen qemu-dm (traditional) history / commits?
> >> >
> >> > -- Pasi
> >>
> >> Yes, I did. I tried to reproduce the issue with qemu-dm by stepping back
> >> through its commit history. But the qemu-dm mainline has merged other
> >> branches, such as the 'upstream' and 'qemu' branches, so I could not build
> >> the xen-4.1.2/tools project properly. CC'ing Ian and Anthony.
> >>
> > By analyzing the xentrace data, I found that there are lots of VMEXITs in the
> > memory region 0xa0000~0xaffff with upstream qemu, but only 256 VMEXITs with
> > traditional qemu-dm, when the Windows XP guest boots up or when I change the
> > method of connecting to the VM from VNC(RDP) to RDP(VNC):
> >
> > linux-sAGhxH:# cat qemu-upstream.log |grep 0x00000000000a|wc -l
> > 640654
> > linux-sAGhxH:# cat qemu-dm.log |grep 0x00000000000a|wc -l
> > 256
> > And the 256 VMEXITs of qemu-dm are the same as the top 256 VMEXITs of
> > qemu-upstream.
> >
> > linux-sAGhxH:# cat qemu-upstream.log |grep 0x00000000000a| tail
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b08 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b0a mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b0c mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b0e mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b10 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b12 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b14 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b16 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b18 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b1a mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > linux-sAGhxH:# cat qemu-dm.log |grep 0x00000000000a| tail
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1ec0 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1ee0 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f00 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f20 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f40 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f60 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f80 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1fa0 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1fc0 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> > CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1fe0 mfn = 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> >
> > Please see the attachment for more information.
> >
> > Because of the large number of VMEXITs, the cirrus_vga_ioport_write function
> > gets little scheduling time, and the interval between two executions of
> > cirrus_vga_write_gr grows to more than 32ms, so the blank-screen time exceeds
> > 13 seconds with upstream qemu on Xen.
> > I don't know why the Windows XP guest accesses the memory region
> > 0xa0000~0xaffff with upstream qemu but not with traditional qemu-dm.
> > Can anyone give me some suggestions?
> 
> Anthony Perard is probably the best person to answer this question,
> but unfortunately he's on holiday at the moment.
> 
> It might be interesting to see what xenalyze tells you about what it
> sees in the trace:
> 
> http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
> 
> I think you'll want to use "--summary --with-mmio-enumeration".
> 
>  -George

Hi, George. I analyzed the xentrace data with xenalyze and got the following
results; actually, I don't understand what they mean:

linux-sAGhxH: # xenalyze qemu-upstream.log --summary --with-mmio-enumeration
Using VMX hardware-assisted virtualization.
scan_for_new_pcpu: Activating pcpu 0 at offset 0
Creating vcpu 0 for dom 32768
scan_for_new_pcpu: Activating pcpu 1 at offset 4180
Creating vcpu 1 for dom 32768
init_pcpus: through first trace write, done for now.
hvm_generic_postprocess: Strange, exit 2c(APIC_ACCESS) missing a handler
WARNING: Not enumerationg MMIO in VGA range.  Use --mmio-enumeration-skip-vga=0 
to override.
hvm_generic_postprocess: Strange, exit 0(EXCEPTION_NMI) missing a handler
hvm_generic_postprocess: HVM evt 0 in 2c and 0!
read_record: read returned zero, deactivating pcpu 1
deactivate_pcpu: setting d32768v1 to state LOST
deactivate_pcpu: Setting max_active_pcpu to 0
read_record: read returned zero, deactivating pcpu 0
deactivate_pcpu: setting d32768v0 to state LOST
deactivate_pcpu: Setting max_active_pcpu to -1
Total time: 4.71 seconds (using cpu speed 2.40 GHz)
--- Log volume summary ---
 - cpu 0 -
 gen   :       3292
 hvm   :   47935940
 +-vmentry:    8055696
 +-vmexit :   13426160
 +-handler:   26454084
 - cpu 1 -
 gen   :         12
 hvm   :     574408
 +-vmentry:     158424
 +-vmexit :     264040
 +-handler:     151944

linux-sAGhxH: # xenalyze qemu-dm.log --summary --with-mmio-enumeration 
Using VMX hardware-assisted virtualization.
scan_for_new_pcpu: Activating pcpu 2 at offset 0
Creating vcpu 2 for dom 32768
scan_for_new_pcpu: Activating pcpu 3 at offset 6012
Creating vcpu 3 for dom 32768
init_pcpus: through first trace write, done for now.
hvm_generic_postprocess: Strange, exit 2c(APIC_ACCESS) missing a handler
WARNING: Not enumerationg MMIO in VGA range.  Use --mmio-enumeration-skip-vga=0 
to override.
hvm_generic_postprocess: Strange, exit 0(EXCEPTION_NMI) missing a handler
hvm_generic_postprocess: HVM evt 0 in 2c and 0!
read_record: read returned zero, deactivating pcpu 3
deactivate_pcpu: setting d32768v3 to state LOST
deactivate_pcpu: Setting max_active_pcpu to 2
read_record: read returned zero, deactivating pcpu 2
deactivate_pcpu: setting d32768v2 to state LOST
deactivate_pcpu: Setting max_active_pcpu to -1
Total time: 4.71 seconds (using cpu speed 2.40 GHz)
--- Log volume summary ---
 - cpu 2 -
 gen   :        180
 hvm   :    4139172
 +-vmentry:    1030356
 +-vmexit :    1717280
 +-handler:    1391536
 - cpu 3 -
 gen   :        148
 hvm   :    2452600
 +-vmentry:     676188
 +-vmexit :    1126980
 +-handler:     649432
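
For what it's worth, here is a rough, untested sketch of how the NPF exits could be
bucketed by 4KiB page inside the VGA window, to see whether the faults are spread
across the whole 0xa0000~0xaffff range or hit only a few pages. It assumes the gpa
is the 9th whitespace-separated field of each trace line, as in the lines quoted
above, and reuses the same log file name as my earlier commands:

# Count NPF exits per 4KiB page in the 0xa0000~0xaffff window.
# substr() drops the last three hex digits of the gpa, i.e. the page offset.
grep 'gpa = 0x00000000000a' qemu-upstream.log \
  | awk '{ page = substr($9, 1, length($9) - 3); n[page]++ }
         END { for (p in n) printf "%8d  %sxxx\n", n[p], p }' \
  | sort -rn

If the upstream trace shows faults spread over every page while the qemu-dm trace
only touches a handful, that would fit the 640654-vs-256 counts quoted above.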

-Gonglei
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel