
[Xen-devel] [linux-4.1 test] 101737: regressions - FAIL



flight 101737 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101737/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt    13 saverestore-support-check fail REGR. vs. 101401
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail REGR. vs. 101401
 test-armhf-armhf-xl-multivcpu 15 guest-start/debian.repeat fail REGR. vs. 101401
 test-armhf-armhf-libvirt-qcow2 12 saverestore-support-check fail in 101687 REGR. vs. 101401

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail in 101687 pass in 101737
 test-armhf-armhf-xl-credit2  11 guest-start                fail pass in 101687
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install        fail pass in 101687
 test-armhf-armhf-xl-cubietruck 11 guest-start              fail pass in 101715
 test-armhf-armhf-xl-xsm      11 guest-start                fail pass in 101715
 test-armhf-armhf-xl-rtds     11 guest-start                fail pass in 101715

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-xsm 15 guest-start/debian.repeat fail blocked in 101401
 test-armhf-armhf-xl-credit2 15 guest-start/debian.repeat fail in 101687 like 101401
 test-armhf-armhf-xl-cubietruck 15 guest-start/debian.repeat fail in 101715 like 101401
 test-armhf-armhf-xl-xsm 15 guest-start/debian.repeat fail in 101715 like 101401
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail in 101715 like 101401
 test-armhf-armhf-xl          15 guest-start/debian.repeat    fail  like 101401
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop             fail like 101401
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop             fail like 101401
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop            fail like 101401
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop            fail like 101401
 test-armhf-armhf-xl-vhd       9 debian-di-install            fail  like 101401

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail in 101687 never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 101687 never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail in 101687 never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail in 101715 never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail in 101715 never pass
 test-armhf-armhf-xl-xsm     12 migrate-support-check fail in 101715 never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail in 101715 never pass
 test-armhf-armhf-xl-rtds    12 migrate-support-check fail in 101715 never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 101715 never pass
 build-i386-rumprun            7 xen-build                    fail   never pass
 build-amd64-rumprun           7 xen-build                    fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestore            fail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install            fail   never pass

version targeted for testing:
 linux                9ca365c0c8bdd8552ec064f0e696600cf7ea66dd
baseline version:
 linux                91473db3a3257eacead8f4d84cf4bc96c447193f

Last test of basis   101401  2016-10-12 15:06:53 Z   16 days
Testing same since   101649  2016-10-24 18:21:35 Z    4 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christopher S. Hall <christopher.s.hall@xxxxxxxxx>
  Daniel J Blueman <daniel@xxxxxxxxx>
  Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
  Hugh Dickins <hughd@xxxxxxxxxx>
  John Stultz <john.stultz@xxxxxxxxxx>
  Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
  Mathias Nyman <mathias.nyman@xxxxxxxxxxxxxxx>
  Sasha Levin <alexander.levin@xxxxxxxxxxx>
  Thomas Gleixner <tglx@xxxxxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          fail    
 build-i386-rumprun                                           fail    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           pass    
 test-amd64-i386-xl-qemut-winxpsp3                            pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9ca365c0c8bdd8552ec064f0e696600cf7ea66dd
Author: Sasha Levin <alexander.levin@xxxxxxxxxxx>
Date:   Sat Oct 22 16:05:53 2016 -0400

    Linux 4.1.35
    
    Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>

commit 3b244a6e53dc33f6701bfde8601d7677c0c6e9e8
Author: Mathias Nyman <mathias.nyman@xxxxxxxxxxxxxxx>
Date:   Fri Dec 11 14:38:06 2015 +0200

    xhci: fix usb2 resume timing and races.
    
    [ Upstream commit f69115fdbc1ac0718e7d19ad3caa3da2ecfe1c96 ]
    
    According to the USB 2 spec, ports need to signal resume for at
    least 20ms (in practice even longer) before moving to the U0 state.
    Both the host and devices can initiate resume.
    
    On device initiated resume, a port status interrupt with the port in
    resume state is issued. The interrupt handler tags a
    resume_done[port] timestamp with current time + USB_RESUME_TIMEOUT,
    and kicks the roothub timer. The roothub timer requests the port
    status, finds the port in resume state, checks whether the
    resume_done[port] timestamp has passed, and sets the port to the U0
    state.
    
    On host initiated resume, the current code sets the port to resume
    state, sleeps 20ms, and finally sets the port to the U0 state. This
    should also be changed to work in a similar way to device initiated
    resume, with timestamp tagging, but that is not yet tested and will
    be a separate fix later.
    
    There are a few issues with this approach:
    
    1. A host initiated resume will also generate a resume event. The event
       handler will find the port in resume state, believe it's a device
       initiated resume, and act accordingly.
    
    2. A port status request might cut the resume signalling short if a
       get_port_status request is handled during the host resume
       signalling. The port will be found in resume state, but the
       timestamp is not set, so time_after_eq(jiffies, timestamp)
       returns true (as timestamp = 0) and get_port_status proceeds
       with moving the port to U0.
    
    3. If an error, or anything else, happens to the port during device
       initiated resume signalling, it will leave all the device resume
       parameters uncleared, preventing further suspend, returning
       -EBUSY, and causing the PM thread to busyloop trying to enter
       suspend.
    
    Fix this by using the existing resuming_ports bitfield to indicate
    that resume signalling timing is taken care of. Check whether
    resume_done[port] is set before using it for timestamp comparison,
    and also clear out any resume signalling related variables if the
    port is not in the U0 or Resume state.
    
    This issue was discovered when a PM thread busylooped trying to
    runtime suspend the xhci USB 2 roothub on a Dell XPS.
    
    Cc: stable <stable@xxxxxxxxxxxxxxx>
    Reported-by: Daniel J Blueman <daniel@xxxxxxxxx>
    Tested-by: Daniel J Blueman <daniel@xxxxxxxxx>
    Signed-off-by: Mathias Nyman <mathias.nyman@xxxxxxxxxxxxxxx>
    Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>
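
[Editor's note: the guard described in the commit above is easy to model
outside the kernel. The following standalone C sketch is illustrative
only; resume_done, USB_RESUME_TIMEOUT and the jiffies-style comparison
mirror names from the commit message, everything else is invented. It
shows why an unset timestamp of 0 makes the unguarded time_after_eq()
check fire immediately (issue 2 above).]

    #include <stdio.h>
    #include <stdbool.h>
    
    #define USB_RESUME_TIMEOUT 40UL        /* ticks; illustrative value */
    
    static unsigned long jiffies;          /* fake tick counter */
    static unsigned long resume_done[2];   /* 0 == no resume in progress */
    
    /* time_after_eq(a, b): true if a is at or past b (wrap-safe) */
    static bool time_after_eq(unsigned long a, unsigned long b)
    {
        return (long)(a - b) >= 0;
    }
    
    /* Buggy check: with resume_done[port] == 0, this is always true,
     * so a stray get_port_status ends resume signalling immediately. */
    static bool resume_elapsed_buggy(int port)
    {
        return time_after_eq(jiffies, resume_done[port]);
    }
    
    /* Fixed check: only trust the timestamp if a resume is pending. */
    static bool resume_elapsed_fixed(int port)
    {
        return resume_done[port] &&
               time_after_eq(jiffies, resume_done[port]);
    }
    
    int main(void)
    {
        jiffies = 1000;
        printf("no resume pending: buggy=%d fixed=%d\n",
               resume_elapsed_buggy(0), resume_elapsed_fixed(0));
    
        resume_done[0] = jiffies + USB_RESUME_TIMEOUT;  /* resume starts */
        jiffies += 10;                                  /* 10 ticks later */
        printf("mid-resume:        buggy=%d fixed=%d\n",
               resume_elapsed_buggy(0), resume_elapsed_fixed(0));
        return 0;
    }

The first line prints buggy=1 fixed=0: the buggy check claims the resume
window has already elapsed even though no resume is in progress at all.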

commit c865f98df72112a3997b219bf711bc46c1e90706
Author: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Date:   Thu Oct 13 13:07:36 2016 -0700

    mm: remove gup_flags FOLL_WRITE games from __get_user_pages()
    
    [ Upstream commit 19be0eaffa3ac7d8eb6784ad9bdbc7d67ed8e619 ]
    
    This is an ancient bug that was actually attempted to be fixed once
    (badly) by me eleven years ago in commit 4ceb5db9757a ("Fix
    get_user_pages() race for write access") but that was then undone due to
    problems on s390 by commit f33ea7f404e5 ("fix get_user_pages bug").
    
    In the meantime, the s390 situation has long been fixed, and we can now
    fix it by checking the pte_dirty() bit properly (and do it better).  The
    s390 dirty bit was implemented in abf09bed3cce ("s390/mm: implement
    software dirty bits") which made it into v3.9.  Earlier kernels will
    have to look at the page state itself.
    
    Also, the VM has become more scalable, and what used to be a purely
    theoretical race back then has become easier to trigger.
    
    To fix it, we introduce a new internal FOLL_COW flag to mark the
    "yes, we already did a COW" case, rather than play racy games with
    FOLL_WRITE (which is very fundamental), and then use the pte dirty
    flag to validate that the FOLL_COW flag is still valid.
    
    Reported-and-tested-by: Phil "not Paul" Oester <kernel@xxxxxxxxxxxx>
    Acked-by: Hugh Dickins <hughd@xxxxxxxxxx>
    Reviewed-by: Michal Hocko <mhocko@xxxxxxxx>
    Cc: Andy Lutomirski <luto@xxxxxxxxxx>
    Cc: Kees Cook <keescook@xxxxxxxxxxxx>
    Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
    Cc: Willy Tarreau <w@xxxxxx>
    Cc: Nick Piggin <npiggin@xxxxxxxxx>
    Cc: Greg Thelen <gthelen@xxxxxxxxxx>
    Cc: stable@xxxxxxxxxxxxxxx
    Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>
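
[Editor's note: the shape of the check the commit describes can be
modelled in a few lines of standalone C. This is a toy pte struct with
illustrative flag values, not the actual mm/gup.c code: a forced write
through a read-only PTE is only honoured when FOLL_COW is set and the
PTE is dirty, i.e. after the COW really happened.]

    #include <stdio.h>
    #include <stdbool.h>
    
    #define FOLL_WRITE 0x01u
    #define FOLL_FORCE 0x10u
    #define FOLL_COW   0x4000u  /* internal "we already did the COW" marker */
    
    struct pte_model { bool write; bool dirty; };
    
    /* Modelled on the helper the upstream fix adds: writable PTEs pass,
     * and a forced write on a read-only PTE passes only once our own
     * COW has happened, which the dirty bit certifies. */
    static bool can_follow_write_pte(struct pte_model pte, unsigned int flags)
    {
        return pte.write ||
               ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte.dirty);
    }
    
    int main(void)
    {
        struct pte_model ro_clean = { .write = false, .dirty = false };
        struct pte_model ro_cowed = { .write = false, .dirty = true  };
    
        /* Forced write before any COW: now refused (prints 0). */
        printf("forced, no COW yet: %d\n",
               can_follow_write_pte(ro_clean, FOLL_WRITE | FOLL_FORCE));
        /* Forced write after our COW, FOLL_COW set: allowed (prints 1). */
        printf("forced, after COW:  %d\n",
               can_follow_write_pte(ro_cowed,
                                    FOLL_WRITE | FOLL_FORCE | FOLL_COW));
        return 0;
    }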

commit 4edf04a3a307195be345f8189e37501ac247f68b
Author: John Stultz <john.stultz@xxxxxxxxxx>
Date:   Tue Oct 4 19:55:48 2016 -0700

    timekeeping: Fix __ktime_get_fast_ns() regression
    
    [ Upstream commit 58bfea9532552d422bde7afa207e1a0f08dffa7d ]
    
    In commit 27727df240c7 ("Avoid taking lock in NMI path with
    CONFIG_DEBUG_TIMEKEEPING"), I changed the logic to open-code
    the timekeeping_get_ns() function, but I forgot to include
    the unit conversion from cycles to nanoseconds, breaking the
    function's output, which impacts users like perf.
    
    This results in bogus perf timestamps like:
     swapper     0 [000]   253.427536:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.426573:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.426687:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.426800:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.426905:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.427022:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.427127:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.427239:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.427346:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   254.427463:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]   255.426572:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
    
    Instead of the more reasonable, expected timestamps like:
     swapper     0 [000]    39.953768:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.064839:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.175956:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.287103:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.398217:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.509324:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.620437:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.731546:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.842654:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    40.953772:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
     swapper     0 [000]    41.064881:  111111111 cpu-clock:  ffffffff810a0de6 native_safe_halt+0x6 ([kernel.kallsyms])
    
    Add the proper use of timekeeping_delta_to_ns() to convert
    the cycle delta to nanoseconds as needed.
    
    Thanks to Brendan and Alexei for finding this quickly after
    the v4.8 release. Unfortunately the problematic commit has
    landed in some -stable trees so they'll need this fix as
    well.
    
    Many apologies for this mistake. I'll be looking to add a
    perf-clock sanity test to the kselftest timers tests soon.
    
    Fixes: 27727df240c7 "timekeeping: Avoid taking lock in NMI path with CONFIG_DEBUG_TIMEKEEPING"
    Reported-by: Brendan Gregg <bgregg@xxxxxxxxxxx>
    Reported-by: Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx>
    Tested-and-reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
    Signed-off-by: John Stultz <john.stultz@xxxxxxxxxx>
    Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
    Cc: stable <stable@xxxxxxxxxxxxxxx>
    Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
    Link: http://lkml.kernel.org/r/1475636148-26539-1-git-send-email-john.stultz@xxxxxxxxxx
    Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>
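
[Editor's note: a standalone sketch of the conversion the fix restores.
Constants are illustrative, not from any real clocksource, and the
kernel's timekeeping_delta_to_ns() also folds in a shifted
sub-nanosecond remainder (xtime_nsec) that is omitted here. The point
is that the fast path must scale the cycle delta by mult >> shift
rather than add raw cycles to the base time.]

    #include <stdio.h>
    #include <stdint.h>
    
    struct tk_read_base_model {
        uint64_t cycle_last;  /* counter value at last update */
        uint64_t base_ns;     /* system time at cycle_last, in ns */
        uint32_t mult;        /* cycles -> ns multiplier */
        uint32_t shift;       /* fixed-point shift for mult */
    };
    
    /* timekeeping_delta_to_ns() equivalent: ns = (delta * mult) >> shift */
    static uint64_t delta_to_ns(const struct tk_read_base_model *tkr,
                                uint64_t delta)
    {
        return (delta * tkr->mult) >> tkr->shift;
    }
    
    int main(void)
    {
        /* A 10 MHz counter: 100 ns per cycle, so mult/shift are chosen
           such that (1 cycle * mult) >> shift == 100 ns. */
        struct tk_read_base_model tkr = {
            .cycle_last = 1000000, .base_ns = 500000000ULL,
            .mult = 100 << 8, .shift = 8,
        };
        uint64_t now_cycles = 1005000;            /* 5000 cycles later */
        uint64_t delta = now_cycles - tkr.cycle_last;
    
        uint64_t buggy = tkr.base_ns + delta;                    /* raw cycles */
        uint64_t fixed = tkr.base_ns + delta_to_ns(&tkr, delta); /* converted  */
        printf("buggy: %llu ns, fixed: %llu ns\n",
               (unsigned long long)buggy, (unsigned long long)fixed);
        return 0;
    }

With these numbers the buggy path advances time by only 5000 "ns"
(really cycles) where the fixed path correctly advances it by 500000 ns,
which is exactly the kind of stalled timestamp visible in the perf
output above.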

commit 486eeb6d6755a573829e6c5438eb7aae18891bb1
Author: Christopher S. Hall <christopher.s.hall@xxxxxxxxx>
Date:   Mon Feb 22 03:15:19 2016 -0800

    time: Add cycles to nanoseconds translation
    
    [ Upstream commit 6bd58f09e1d8cc6c50a824c00bf0d617919986a1 ]
    
    The timekeeping code does not currently provide a way to translate
    externally provided clocksource cycles to system time. The cycle
    count is always provided by the result of the clocksource read()
    method internal to the timekeeping code. The added function
    timekeeping_cycles_to_ns() calculates a nanosecond value from a
    cycle count; this can be added to the tk_read_base.base value,
    yielding the current system time. This allows cycle values captured
    outside the timekeeping code to be transformed to system time.
    
    Cc: Prarit Bhargava <prarit@xxxxxxxxxx>
    Cc: Richard Cochran <richardcochran@xxxxxxxxx>
    Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Cc: Ingo Molnar <mingo@xxxxxxxxxx>
    Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
    Cc: kevin.b.stanton@xxxxxxxxx
    Cc: kevin.j.clarke@xxxxxxxxx
    Cc: hpa@xxxxxxxxx
    Cc: jeffrey.t.kirsher@xxxxxxxxx
    Cc: netdev@xxxxxxxxxxxxxxx
    Reviewed-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Signed-off-by: Christopher S. Hall <christopher.s.hall@xxxxxxxxx>
    Signed-off-by: John Stultz <john.stultz@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>
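
[Editor's note: continuing the toy model from the previous note, the
translation this commit describes amounts to a wrap-safe delta against
cycle_last, converted with the same mult/shift arithmetic and added to
the base time. Field names follow the commit text; this is a sketch,
not the upstream helper, and "mask" stands in for clocksources narrower
than 64 bits.]

    /* Turn an externally captured counter value into system time:
     * mask the delta so a wrapped counter still yields the distance
     * travelled since cycle_last, then scale and add the base. */
    static uint64_t cycles_to_ns_model(const struct tk_read_base_model *tkr,
                                       uint64_t cycles, uint64_t mask)
    {
        uint64_t delta = (cycles - tkr->cycle_last) & mask;  /* wrap-safe */
        return tkr->base_ns + ((delta * tkr->mult) >> tkr->shift);
    }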

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

