
Re: [Xen-devel] [PATCH 00/17] x86/hvm: I/O emulation cleanup and fix



> -----Original Message-----
> From: Fabio Fantoni [mailto:fabio.fantoni@xxxxxxx]
> Sent: 10 June 2015 15:50
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Andrew Cooper; Keir (Xen.org); Jan Beulich
> Subject: Re: [Xen-devel] [PATCH 00/17] x86/hvm: I/O emulation cleanup and
> fix
> 
> On 10/06/2015 11:13, Fabio Fantoni wrote:
> > On 09/06/2015 17:21, Paul Durrant wrote:
> >>> -----Original Message-----
> >>> From: Fabio Fantoni [mailto:fabio.fantoni@xxxxxxx]
> >>> Sent: 09 June 2015 15:44
> >>> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> >>> Subject: Re: [Xen-devel] [PATCH 00/17] x86/hvm: I/O emulation
> >>> cleanup and
> >>> fix
> >>>
> >>> On 08/06/2015 16:33, Paul Durrant wrote:
> >>>> This patch series re-works much of the code involved in emulation
> >>>> of port
> >>>> and memory mapped I/O for HVM guests.
> >>>>
> >>>> The code has become very convoluted and, at least by inspection,
> >>>> certain
> >>>> emulations will apparently malfunction.
> >>>>
> >>>> The series is broken down into 17 patches (which are also available in
> >>>> my xenbits repo:
> >>> http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git
> >>>> on the emulation18 branch) as follows:
> >>> Big thanks for your work.
> >>> I tested them, taking the patches from the emulation18 branch.
> >>> I tried an xl create of a Windows 7 domU, but during xl create dom0
> >>> instantly rebooted, and there is nothing about the problem in the logs
> >>> (kern.log, syslog, qemu log, xl log, etc.).
> >>> I also tried a Linux (HVM) domU, and with stdvga instead of the qxl I
> >>> had set, but got the same result.
> >>>
> >>> To debug the problem, should I enable all Xen debug options in the
> >>> grub entry and redirect the output to serial, as when debugging a dom0
> >>> boot problem, or should I do something different?
> >>>
> >> Having serial is very useful in these cases. I've been debugging with
> >> a 32-bit Win 7 VM, so it would be useful if you could bisect a bit.
> >> There are natural boundaries in the series you could try first:
> >>
> >> Apply everything up to and including 'x86/hvm: unify stdvga mmio
> >> intercept with standard mmio intercept'
> >> Then try everything up to and including 'x86/hvm: remove extraneous
> >> parameter from hvmtrace_io_assist()'
> >> Then, if you get that far, try the rest one at a time.
> >>
> >>    Paul
> 
> Found the patch that causes the dom0 insta-reboot:
>   x86/hvm: remove multiple open coded 'chunking' loops in
> hvmemul_read/write()
> 

I just ran into that too. There's a bogus assertion in there. I thought I was 
running with assertions on but clearly I wasn't.
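
For anyone trying to pin this down, here's a rough sketch of the kind of
page-bounded 'chunking' loop that patch consolidates. It is not the actual
Xen code; the function names, the assert() and the stand-in issue_chunk()
are made up for illustration. The point is just that an assertion in a path
like this fires in the hypervisor itself, so a wrong invariant shows up as
a host crash/reboot rather than a failed guest access:

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define PAGE_SIZE 4096u

  /* Stand-in for issuing one chunk of I/O; a real emulator would send an
   * ioreq to the device model (or an internal handler) here. */
  static int issue_chunk(uint64_t addr, unsigned int chunk, uint8_t *p,
                         bool is_write)
  {
      (void)addr; (void)chunk; (void)p; (void)is_write;
      return 0;
  }

  /* Illustrative only: split a linear access into chunks that never
   * cross a page boundary, in place of several open coded loops. */
  static int linear_access(uint64_t addr, unsigned int size, uint8_t *buf,
                           bool is_write)
  {
      while ( size != 0 )
      {
          unsigned int off = addr & (PAGE_SIZE - 1);
          unsigned int chunk = PAGE_SIZE - off;

          if ( chunk > size )
              chunk = size;

          /* Invariant: a chunk must stay within one page. In a debug
           * build of the hypervisor a failed ASSERT() brings the host
           * down, which is what a dom0 insta-reboot with nothing in the
           * logs looks like. */
          assert(off + chunk <= PAGE_SIZE);

          if ( issue_chunk(addr, chunk, buf, is_write) != 0 )
              return -1;

          addr += chunk;
          buf  += chunk;
          size -= chunk;
      }

      return 0;
  }

  int main(void)
  {
      uint8_t buf[16] = { 0 };

      /* A read straddling a page boundary is split into two chunks. */
      return linear_access(PAGE_SIZE - 4, sizeof(buf), buf, false);
  }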

I also found another couple of issues having backported the patches onto a 
recent XenServer. I'll post a v2 of the series once I've incorporated all the 
fixes.

  Paul

> Also attached is the xl -vvv create output up to the crash, in case it is
> useful; I found no useful information in the logs after the reboot :(
> 
> If you need more information or tests, tell me and I'll post them.
> 
> >
> > Thanks for the reply.
> > I tried to boot with full debug enabled and with serial over LAN, but
> > the server insta-reboots without showing any error or useful output on
> > the redirected output or on the monitor.
> > I used the same parameters as in another debug session I did a long
> > time ago, if I remember correctly; here is the grub2 entry:
> >> menuentry 'Wheezy con Linux 3.16.0-0.bpo.4-amd64 e XEN - RAID - Debug
> >> su Seriale' --class debian --class gnu-linux --class gnu --class os {
> >>     set root='(RAID-ROOT)'
> >>     echo    'Caricamento Hypervisor Xen...'
> >>     multiboot    /boot/xen.gz placeholder dom0_mem=2G,max:3G
> >> swiotlb=65762 loglvl=all guest_loglvl=all sync_console
> >> console_to_ring com2=19200,8n1 console=com2
> >>     echo    'Caricamento Linux 3.16.0-0.bpo.4-amd64...'
> >>     linux    /boot/vmlinuz-3.16.0-0.bpo.4-amd64 placeholder
> >> root=/dev/mapper/RAID-ROOT ro console=hvc0 earlyprintk=xen nomodeset
> >>     echo    'Caricamento ramdisk iniziale...'
> >>     initrd    /boot/initrd.img-3.16.0-0.bpo.4-amd64
> >> }
> > Here is the working entry without debug:
> >>
> >> menuentry 'RAID - Debian 7.8 (wheezy) con Linux 3.16.0-0.bpo.4-amd64
> >> e XEN 4.6-unstable' --class debian --class gnu-linux --class gnu
> >> --class os --class xen {
> >>     set fallback="1"
> >>     set root='(RAID-ROOT)'
> >>     echo    'Caricamento Hypervisor Xen 4.6-unstable...'
> >>     multiboot    /boot/xen.gz placeholder dom0_mem=2G,max:3G
> >>     echo    'Caricamento Linux 3.16.0-0.bpo.4-amd64 ...'
> >>     module    /boot/vmlinuz-3.16.0-0.bpo.4-amd64 placeholder
> >> root=/dev/mapper/RAID-ROOT ro swiotlb=65762 quiet
> >>     echo    'Caricamento ramdisk iniziale...'
> >>     module    /boot/initrd.img-3.16.0-0.bpo.4-amd64
> >> }
> > Did I do something wrong?
> >
> > Now I'll try to bisect the patch series following your advice.
> >
> >>
> >>> Thanks for any reply, and sorry for my bad English.
> >>>
> >>>> x86/hvm: simplify hvmemul_do_io()
> >>>> x86/hvm: re-name struct hvm_mmio_handler to hvm_mmio_ops
> >>>> x86/hvm: unify internal portio and mmio intercepts
> >>>> x86/hvm: unify dpci portio intercept with standard portio intercept
> >>>> x86/hvm: unify stdvga mmio intercept with standard mmio intercept
> >>>> x86/hvm: revert 82ed8716b "fix direct PCI port I/O emulation retry...
> >>>> x86/hvm: only call hvm_io_assist() from hvm_wait_for_io()
> >>>> x86/hvm: split I/O completion handling from state model
> >>>> x86/hvm: remove hvm_io_pending() check in hvmemul_do_io()
> >>>> x86/hvm: remove HVMIO_dispatched I/O state
> >>>> x86/hvm: remove hvm_io_state enumeration
> >>>> x86/hvm: use ioreq_t to track in-flight state
> >>>> x86/hvm: only acquire RAM pages for emulation when we need to
> >>>> x86/hvm: remove extraneous parameter from hvmtrace_io_assist()
> >>>> x86/hvm: make sure translated MMIO reads or writes fall within a page
> >>>> x86/hvm: remove multiple open coded 'chunking' loops
> >>>> x86/hvm: track large memory mapped accesses by linear address
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> Xen-devel mailing list
> >>>> Xen-devel@xxxxxxxxxxxxx
> >>>> http://lists.xen.org/xen-devel
> >


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel