 Re: [Xen-devel] [PATCH v5 00/16] x86/hvm: I/O emulation cleanup and fix
 
On 30/06/2015 16:48, Fabio Fantoni wrote:
On 30/06/2015 15:05, Paul Durrant wrote:
      
This patch series re-works much of the code involved in emulation of
port and memory mapped I/O for HVM guests.
 
The code has become very convoluted and, at least by inspection,
certain emulations will apparently malfunction.
 
The series is broken down into 16 patches (which are also available in
my xenbits repo: http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git
on the emulation32 branch).
 
 Previous changelog
 ------------------
 
 v2:
 - Removed bogus assertion from patch #15
- Re-worked patch #17 after basic testing of back-port onto XenServer
 
 v3:
 - Addressed comments from Jan
- Re-ordered series to bring a couple of more trivial patches to the
  front
 - Backport to XenServer (4.5) now passing automated tests
- Tested on unstable with QEMU upstream and trad, with and without
  HAP (to force shadow emulation)
 
 v4:
 - Removed previous patch (make sure translated MMIO reads or
 writes fall within a page) and rebased rest of series.
 - Address Jan's comments on patch #1
 
 Changelog (now per-patch)
 -------------------------
 
 0001-x86-hvm-make-sure-emulation-is-retried-if-domain-is-.patch
 
 This is a fix for an issue on staging reported by Don Slutz
 
 0002-x86-hvm-remove-multiple-open-coded-chunking-loops.patch
 
 v5: Addressed further comments from Jan
 
 0003-x86-hvm-change-hvm_mmio_read_t-and-hvm_mmio_write_t-.patch
 
 v5: New patch to tidy up types
 
 0004-x86-hvm-restrict-port-numbers-to-uint16_t-and-sizes-.patch
 
 v5: New patch to tidy up more types
 
 0005-x86-hvm-unify-internal-portio-and-mmio-intercepts.patch
 
v5: Addressed further comments from Jan and simplified implementation
by passing ioreq_t to accept() function
 
 0006-x86-hvm-add-length-to-mmio-check-op.patch
 
 v5: Simplified by leaving mmio_check() implementation alone and
 calling to check last byte if first-byte check passes
 
 0007-x86-hvm-unify-dpci-portio-intercept-with-standard-po.patch
 
 v5: Addressed further comments from Jan
 
 0008-x86-hvm-unify-stdvga-mmio-intercept-with-standard-mm.patch
 
 v5: Fixed semantic problems pointed out by Jan
 
 0009-x86-hvm-limit-reps-to-avoid-the-need-to-handle-retry.patch
 
 v5: Addressed further comments from Jan
 
 0010-x86-hvm-only-call-hvm_io_assist-from-hvm_wait_for_io.patch
 
 v5: Added Jan's acked-by
 
 0011-x86-hvm-split-I-O-completion-handling-from-state-mod.patch
 
v5: Confirmed call to msix_write_completion() is in the correct place.
 
 0012-x86-hvm-remove-HVMIO_dispatched-I-O-state.patch
 
 v5: Added some extra comments to the commit
 
 0013-x86-hvm-remove-hvm_io_state-enumeration.patch
 
 v5: Added Jan's acked-by
 
 0014-x86-hvm-use-ioreq_t-to-track-in-flight-state.patch
 
 v5: Added missing hunk with call to handle_pio()
 
 0015-x86-hvm-always-re-emulate-I-O-from-a-buffer.patch
 
 v5: Added Jan's acked-by
 
 0016-x86-hvm-track-large-memory-mapped-accesses-by-buffer.patch
 
v5: Fixed to cache up to three distinct I/O emulations per instruction
 
 Testing
 -------
 
The series has been back-ported to staging-4.5 and then dropped onto
the XenServer (Dundee) patch queue. All automated branch-safety tests
pass.
 
The series as-is has been manually tested with a Windows 7 (32-bit) VM
using upstream QEMU.
 
 
 
 Thanks for your work.
I did some very quick tests and found no regressions, but on my Linux
domUs qxl is still not working and I'm unable to debug it.
@Jim Fehlig: could you check whether qxl still works, at least on a
SUSE dom0/domU, after this series?
 
Can someone tell me how to debug the qxl problem now that QEMU no
longer crashes but stays at 100% CPU, with nothing about it in the
dom0 logs (if there are any)?
 
Thanks for any reply, and sorry for my bad English.
 
I don't know much about x86 emulation, but I'm trying hard to find the
cause of these problems, which persist despite this series of patches.
In the latest XenGT Xen patches I saw this one: "vgt: add support of
emulating SSE2 instruction MOVD"
https://github.com/01org/XenGT-Preview-xen/commit/f2bad31f80f698a452c37cb39841da8e4f69350f
That XenGT tree is still based on Xen 4.5, though, so its
x86_emulate.c differs, but I noticed some strange things when looking
at it and comparing it with upstream staging...
 
XenGT adds this:

  case 0x7e: /* movd xmm,mm/mm32 */

This seems to still be missing in upstream staging; should it be
added? Should I try the patch with the latest upstream Xen, or does it
need other changes? There is also this:

  ea.bytes = (b == 0x7e ? 4 : 16);

An SSE2 instruction operating on only 4 bytes seems strange to me; is
that right?
 
In upstream I saw this:

  case 0xe7: /* movntq mm,m64 */
             /* {,v}movntdq xmm,m128 */
             /* vmovntdq ymm,m256 */
      fail_if(ea.type != OP_MEM);
      fail_if(vex.pfx == vex_f3);
      /* fall through */
In the last QEMU crash I had with qxl on Linux I got this:
 
 
  #0  __memset_sse2 () at ../sysdeps/x86_64/multiarch/../memset.S:908

Last instruction:

  => 0x7ffff3713f7b <__memset_sse2+2363>:    movntdq %xmm0,(%rdi)

Is it possible that the "fail_if(ea.type != OP_MEM);" or the other one
makes a difference?
 
 
 
After applying this patch series QEMU no longer crashes, but it stays
at 100% CPU and is unusable; all I can do is xl destroy. I find
nothing in the logs and I'm unable to debug it.
Is there anything I can do to debug this?
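One way to see where a spinning QEMU is looping is to sample its
backtraces with gdb from dom0 (a sketch, assuming gdb is installed in
dom0; the helper name and the pgrep pattern are mine and may need
adjusting to the actual device-model process name on your system):

```shell
# Hypothetical helper: grab all-thread backtraces from the (assumed)
# QEMU device-model process a few times in a row, to see where it is
# spinning.  Assumes gdb is available in dom0.
qemu_backtrace() {
    pid=$(pgrep -f "qemu-system" | head -n 1)
    [ -n "$pid" ] || { echo "no qemu process found" >&2; return 1; }
    for i in 1 2 3; do
        # --batch detaches and exits after running the -ex commands
        gdb -p "$pid" --batch -ex 'thread apply all bt' -ex detach
        sleep 1
    done
}
```

The per-domain device-model log under /var/log/xen/ (e.g.
qemu-dm-<domain>.log) may also be worth checking.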
 
I also took a quick look at the SUSE kernel patches
(https://github.com/openSUSE/kernel-source/tree/SLE12-SP1), since qxl
also works on Linux domUs there (other things already seem similar,
based on what Jim Fehlig told me), but I didn't find a possible
fix/workaround to try. Can someone point me to patches I should try,
please?
 
Any help finding a workaround or fix to apply upstream, so that Linux
domUs work with qxl in all cases, would be appreciated.
 
 
 _______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
 