
Re: [Xen-users] Xen 4.12 DomU hang / freeze / stall under high network/disk load


  • To: Glen <glenbarney@xxxxxxxxx>, Xen-users <xen-users@xxxxxxxxxxxxxxxxxxxx>
  • From: Sarah Newman <srn@xxxxxxxxx>
  • Date: Thu, 13 Feb 2020 16:28:47 -0800
  • Delivery-date: Fri, 14 Feb 2020 00:30:16 +0000
  • List-id: Xen user discussion <xen-users.lists.xenproject.org>

On 2/13/20 1:07 PM, Glen wrote:
> Dear Xen Team:
> 
> Since upgrading to Xen 4.12, I'm experiencing an ongoing problem with
> stalled guests.  I had previously thought I was the only one with this
> problem, and had reported this to my distro's virtualization team (see:
> https://lists.opensuse.org/opensuse-virtual/2019-12/msg00000.html and
> https://lists.opensuse.org/opensuse-virtual/2019-12/msg00003.html
> for thread heads), but although they tried to help, we all kind of
> concluded (wrongly) that I must just have had a bad guest.

> I finally recreated a new host and a new guest, clean, from scratch,
> thinking that would solve the problem, and it didn't.  That led me to
> search again, and I now see that another individual (who doubtless
> thought HE was the only one) has reported this issue to your list (
> https://lists.xenproject.org/archives/html/xen-users/2020-02/msg00015.html
> et al).

I'm not one of the Xen developers, but I wouldn't necessarily assume the same 
root cause from the information you've provided so far.

<snip>
> Circumstances when the problem first occurred:
> 1. All hosts and guests were previously on Xen 4.9.4 (OpenSuse 42.3,
> Linux 4.4.180, Xen 4.9.4)
> 2. I upgraded one physical host to Xen 4.12.1 (OpenSuse 15.1, Linux
> 4.12.14, Xen 4.12.1).
> 3. The guest(s) on that host started malfunctioning at that point.
If you can, try Xen 4.9.4 with Linux 4.12.14 (or Xen 4.12.1 with Linux 4.4.180).

That will help isolate the issue to either Xen or the Linux kernel.

<snip>

> 4. Tried to use host xl interface to unplug/replug network bridges.
> This appeared to work from host side, but guest was unaffected.

Do you mean 'xl network-detach' and 'xl network-attach'? If not, please give 
example commands.
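
For reference, the sequence would be something like this (the device id 0
below is an assumption; check what network-list actually reports):

    # list the guest's vifs and their device ids
    xl network-list guest1
    # detach by device id, then re-attach with the same settings as the config
    xl network-detach guest1 0
    xl network-attach guest1 'mac=00:16:3f:49:4a:41,bridge=br0'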

<snip>

> Steps to reproduce:
> 1. Get a server.  I'm using a Dell PowerEdge R720, but this has
> happened on several different Dell models.  My current server has two
> 16-core CPUs, and 128GB of RAM.

What CPUs are these? Can you dump the information for one of the CPUs from /proc/cpuinfo so we can see what microcode version you have, in the highly unlikely case that information is pertinent?
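
For example, this prints just the first processor's block (the "microcode"
line is the part of interest):

    # print the first processor entry from /proc/cpuinfo (paragraph mode)
    awk -v RS= 'NR==1 {print; exit}' /proc/cpuinfo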

> 2. Load Xen 4.12.1 (OpenSuse 15.1/Xen 4.12.1) on the server.  Boot it
> up in Xen Dom0/host mode.

What about attaching the output of 'xl dmesg' - both the initial boot messages 
and anything that comes from running the specific domU?
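
That is, capture it at both points, e.g.:

    # in the dom0, right after boot and again after the domU misbehaves
    xl dmesg > xl-dmesg-after-boot.txt
    xl dmesg > xl-dmesg-after-hang.txt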

> 7. From that other machine, start pounding the guest.  An rsync of the
> entire data partition is a great way to trigger this.  If I run
> several outbound rsyncs together, I can crash my guest in under 48
> hours.  If I run 4 or 5, I can often crash the guest in just 2 hours.
> If you don't want to damage your SSDs on your other machine, here's my
> current command (my host is 192.168.1.10, and my guest is
> 192.168.1.11, so I plug in some other machine and make it, say,
> 192.168.1.12, and then run:
> 
> nohup ssh 192.168.1.11 tar cf - --one-file-system /a | cat > /dev/null &

How about trying iperf or iperf3 with either only transmit or receive? iperf is 
specifically designed to use maximal bandwidth and doesn't use disk.

http://fasterdata.es.net/performance-testing/network-troubleshooting-tools/throughput-tool-comparision/
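
For example, with your guest at 192.168.1.11 (durations are arbitrary):

    # on the domU: run the server
    iperf3 -s
    # on the load machine: transmit toward the guest for an hour
    iperf3 -c 192.168.1.11 -t 3600
    # then separately test the reverse direction (guest transmits)
    iperf3 -c 192.168.1.11 -t 3600 -R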

For independently load-testing disk, you can try dd or fio, while being cognizant of the disk cache. To avoid actual disk I/O I think you should be able to use a ram based disk in the dom0 instead of a physical disk. However, I wouldn't bother if you can reproduce with network only, until the network issue has been fixed.
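
A minimal sketch of the ram-backed variant, assuming you attach a
tmpfs-backed file to the guest as an extra disk (paths and sizes here are
made up):

    # in the dom0: create a ram-backed image and hand it to the guest
    mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk
    truncate -s 3G /mnt/ramdisk/test.img
    xl block-attach guest1 '/mnt/ramdisk/test.img,raw,xvdb,w'

    # in the domU: read and write it with the page cache bypassed
    dd if=/dev/xvdb of=/dev/null bs=1M iflag=direct
    dd if=/dev/zero of=/dev/xvdb bs=1M oflag=direct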

> Administrivia:
> OS: OpenSuse 15.1
> Linux: 4.12.14-lp151.28.36
> Xen: 4.12.1
> Dom0 boot parameters: dom0_mem=4G dom0_max_vcpus=4 dom0_vcpus_pin
> gnttab_max_frames=256
> Xen guest config:
> 
> name="guest1"
> description="guest1"
> memory=90112
> maxmem=90112
> vcpus=26

This is fairly large.

Have you tried both fewer cpus and less memory? If you can reproduce with iperf, which probably will reproduce more quickly, can you reproduce with memory=2048 and vcpus=1 or vcpus=2 for example? FYI the domU might not boot at all with vcpus=1 with some kernel versions.

But I would try that only if none of the network changes show a difference.
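
For the scaled-down test, the config deltas would be just (keeping maxmem in
step with memory is my assumption):

    memory=2048
    maxmem=2048
    vcpus=2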

cpus="4-31"
on_poweroff="destroy"
on_reboot="restart"
on_crash="restart"
on_watchdog="restart"
localtime=0
keymap="en-us"
type="pv"
kernel="/usr/lib/grub2/x86_64-xen/grub.xen"
extra="elevator=noop"
disk=[
         '/xen/guest1/guest1.root,raw,xvda1,w',
         '/xen/guest1/guest1.swap,raw,xvda2,w',
         '/xen/guest1/guest1.xa,raw,xvda3,w',
         ]
vif=[
         'rate=100Mb/s,mac=00:16:3f:49:4a:41,bridge=br0',

You probably want to try removing the vif rate limit. Using rate=... I got soft lockups on the dom0 many kernel versions ago. I don't know whether those dom0 soft lockups have since been fixed - and even if they have, perhaps another problem remains in the domU.

If removing "rate" fixes it, switch to rate limiting with another method - 
possibly 'tc' but there might be something better available now using BPF.
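
A minimal tc-based substitute, assuming the guest's backend device in the
dom0 is vif1.0 (check the actual name with 'ip link'):

    # shape the vif backend to ~100Mbit with a token bucket filter
    tc qdisc add dev vif1.0 root tbf rate 100mbit burst 32k latency 400ms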

Also, have you tried looking at or changing the offload settings in the dom0 and/or domU with "/sbin/ethtool -k/-K <device>"? I don't think this is actually the issue, but it's been a source of problems historically and it's easy to try.
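
For example (vif1.0 in the dom0 and eth0 in the domU are assumptions;
substitute your devices):

    # show the current offload settings
    /sbin/ethtool -k vif1.0
    # try turning off, e.g., TX checksumming and scatter-gather on the backend
    /sbin/ethtool -K vif1.0 tx off sg off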

> I am looking for a means on Xen to bug report this; so far, I haven't
> found it, but I will keep looking.

https://wiki.xen.org/wiki/Reporting_Bugs_against_Xen_Project

But try some more data collection and debugging first, ideally by changing one 
thing at a time.

> Meanwhile, I'm hoping that these details and history spark something
> for some of you here.  Do any of you have any ideas on this?  Any
> thoughts, guidance, musings, etc., anything at all would be
> appreciated.

x-ref https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html

I don't get the impression you've tried using sysrq already, since you did not mention it by name. If you have tried sysrq, it would be helpful if you could go back through your original email and add examples of all of the commands you've run.

For PV, to send a sysrq you can try 'xl sysrq <domU> <key>' or 'ctrl-o <key>' on the virtual serial console. Neither will probably work for HVM; I can't figure out how to send a break on the virtual serial console for HVM right now. You can also use /proc/sysrq-trigger in the domU to send a key if the domU minimally responds.
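
Concretely, with guest1 as the domU name:

    # from the dom0: ask the guest kernel to dump task state ('t')
    xl sysrq guest1 t
    # from inside the domU, if it still responds at all:
    echo t > /proc/sysrq-trigger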

When the domU locks up, you *might* get interesting information from the 'x' and 'l' sysrq commands within the domU. You may need to enable that functionality first with 'sysctl -w kernel.sysrq=1'.

I'm not sure the 'l' command works for PV at all; it does work for HVM.

If you can send a sysrq when the domU is not locked up, but can't send one when 
it's locked up, that's also potentially interesting.

There's a lot of debug information available from the Xen hypervisor too, but I'm not 100% sure which of that is interesting and some of it is fairly intrusive to collect.

--Sarah


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

