Re: Bad performance with Xen
If you ran the test in the Dom0 (Xen-enabled) environment, the result is good.
As a cross-check, you can run the same command with the server booted into the normal environment (without the Xen kernel).
If you get the same result, it means you see no HDD I/O performance loss and that my issue is related to something else.
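Concretely, that means running the exact same dd command on the same filesystem in both boots and comparing the MB/s figure dd prints at the end. Something like this (the scratch file name is just the one from my test):

```
# once under the Xen dom0 kernel, once under the plain Debian kernel
dd bs=512 count=4194304 if=/dev/zero of=test conv=fdatasync
rm test
```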
Please note I'm not using XCP, but plain Xen distro packages on Debian Buster.
g
On 04/05/20 12:48, Olivier Lambert wrote:
I ran your test and I get 235 MB/s on an old Samsung SATA SSD (still on XCP-ng 8.1 with a Debian VM on top).
This is the command:
# dd bs=512 count=4194304 if=/dev/zero of=test conv=fdatasync
It creates a zero-filled file called "test" in the directory where the command is executed.
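When you are done, just delete the file. If you want to rule out page-cache effects, you could also try a direct-I/O variant (not part of my original test; block size and count below are just example values):

```
rm test
# optional cross-check that bypasses the page cache
dd bs=1M count=2048 if=/dev/zero of=test oflag=direct
rm test
```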
Hope it helps
g
On 04/05/20 11:50, Olivier Lambert wrote:
Hi!
Can you share your exact benchmark command so I can test it on my end?
Hi guys. Maybe we are suffering from a related issue. If not, feel free to ignore this message.
I wrote to this list but no one replied:
"Fresh
installed server with Debian Buster on top of
nvme swRaid1 (mdadm)
Testing hdd write seed with dd (with
convert=fdatasync option) gives me the result of
330MB/s. Good.
Installed xen-system and xen-tools (with
--no-recommends option in apt) from official
repository. Rebooted the system.
Re-tested hdd write seed with dd (with
convert=fdatasync option) gives me the result
of 108MB/s. Not good at all.
Maybe the following is not related to the issue, but there is a line in dmesg when I boot the system with the Xen kernel:
...
[ 14.214044] Performance Events: unsupported p6 CPU model 158 no PMU driver, software events only.
...
Instead, when I boot the system without the Xen kernel I have these lines in dmesg:
...
[ 0.517217] Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
[ 0.517356] ... version:                4
[ 0.517444] ... bit width:              48
[ 0.517444] ... generic registers:      4
[ 0.517444] ... value mask:             0000ffffffffffff
[ 0.517444] ... max period:             00007fffffffffff
[ 0.517444] ... fixed-purpose events:   3
[ 0.517444] ... event mask:             000000070000000f
"
Personally, I moved to KVM+libvirt almost without rework.
I/O performance is great.
But I love Xen and I would be pleased to come back to it.
g
On 03/05/20 19:24, Agustin Lopez wrote:
Sorry. I booted with 8 GB for the Dom0 and everything is the same.
I have seen one difference between the two xl info outputs:
(AGUSTIN) virt_caps : hvm hvm_directio
(OLIVIER) virt_caps : pv hvm hvm_directio pv_directio hap shadow iommu_hap_pt_share
Could this be the problem?
Agustín
On 3/5/20 at 18:50, Rob Townley wrote:
Agustin, I noticed 'dom0_mem=2048M,max:4065M', so increasing the RAM allocated to Dom0 might speed up the VMs.
2 GB for dom0 is extremely low in my opinion, especially when most of the 256 GB of host RAM is going to waste.
dom0_mem=2048M,max:4065M
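On Debian, that setting normally lives on the hypervisor command line in GRUB; something like the following (the values are only an example, size it to your host):

```
# /etc/default/grub -- example values, merge with your existing Xen options
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=8192M,max:8192M"
# then: update-grub && reboot
```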
Hard to tell. Here is my xl info to compare:
host                   : xcp-ng-lab-3
release                : 4.19.0+1
version                : #1 SMP Thu Feb 13 17:34:28 CET 2020
machine                : x86_64
nr_cpus                : 4
max_cpu_id             : 3
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 3312.134
hw_caps                : bfebfbff:77faf3ff:2c100800:00000121:0000000f:009c6fbf:00000000:00000100
virt_caps              : pv hvm hvm_directio pv_directio hap shadow iommu_hap_pt_share
total_memory           : 32634
free_memory            : 23619
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 13
xen_extra              : .0-8.4.xcpng8.1
xen_version            : 4.13.0-8.4.xcpng8.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 85e1424de2dd, pq f9dbf852550e
xen_commandline        : watchdog ucode=scan dom0_max_vcpus=1-4 crashkernel=256M,below=4G console=vga vga=mode-0x0311 dom0_mem=8192M,max:8192M
cc_compiler            : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
cc_compile_by          : mockbuild
cc_compile_domain      : [unknown]
cc_compile_date        : Tue Apr 14 18:28:14 CEST 2020
build_id               : 5ad6f12499d7f264544b64568b378260cd82a65f
xend_config_format     : 4
I'm on XCP-ng 8.1. Another difference is that I have more GHz than you. So I ran the test on another server (building a VM just for you :p ) and here is the result for a Xeon E5-2650L v2 @ 1.70GHz (slow!) with the VM disk stored on an NFS share:
real 0m5,925s
user 0m3,769s
sys 0m2,321s
Still far better than the 20 seconds you get!
Let me know if you need further help :)
Best,
Olivier.
Hi Olivier.
I am testing a bit more. In seconds, the results of the command are:
Debian Buster PV    -> 18 s
Debian Buster HVM   ->  8 s
Debian Buster PVHVM ->  8 s
Debian Buster PVH   ->  8 s
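(For reference, I select the guest mode with the `type` option in each xl config file; roughly like this, where the names and sizes are just examples:)

```
# /etc/xen/buster-pvh.cfg -- illustrative snippet
name   = "buster-pvh"
type   = "pvh"        # "pv", "hvm" or "pvh"; PVHVM is HVM plus PV drivers in the guest
memory = 4096
vcpus  = 4
```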
xl info
release                : 4.19.0-8-amd64
version                : #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27)
machine                : x86_64
nr_cpus                : 48
max_cpu_id             : 47
nr_nodes               : 2
cores_per_socket       : 12
threads_per_core       : 2
cpu_mhz                : 2197.458
hw_caps                : bfebfbff:77fef3ff:2c100800:00000121:00000001:001cbfbb:00000000:00000100
virt_caps              : hvm hvm_directio
total_memory           : 261890
free_memory            : 255453
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 11
xen_extra              : .4-pre
xen_version            : 4.11.4-pre
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=2048M,max:4065M
cc_compiler            : gcc (Debian 8.3.0-6) 8.3.0
cc_compile_by          : pkg-xen-devel
cc_compile_domain      : lists.alioth.debian.org
cc_compile_date        : Wed Jan 8 20:16:51 UTC 2020
build_id               : b6822aa1d8f867753b92985e5cb0e806e520a08c
xend_config_format     : 4
Olivier, I get more than double your times. Where is the problem?
Regards,
Agustín
On 2/5/20 at 19:56, Olivier Lambert wrote:
Hi Agustin,
I just did a test on XCP-ng 8.1 (Xen 4.13) with a fresh Debian 10 VM, and here is the result I have:
```
# time for i in `dpkg -L ncurses-term | sort`; do if [ -f "$i" ]; then ls -ld "$i"; fi; done | tr -s " " | cut -d" " -f5,9 >/dev/null
real 0m2,741s
user 0m2,248s
sys 0m0,574s
```
My hardware isn't ultra modern: a Xeon(R) CPU E3-1225 v5 (3.3 GHz) in a small Dell T30 machine, with VM storage on a local HDD. I did the test 3 times and always got results between 2.6 and 2.8 seconds.
Regards,
Olivier.
Hello.
We are seeing low I/O performance with the following command on Debian Buster (kernel 4.19.0-8-amd64) with Xen (4.11.4-pre):
time for i in `dpkg -L ncurses-term | sort`; do if [ -f "$i" ]; then ls -ld "$i"; fi; done | tr -s " " | cut -d" " -f5,9 >/dev/null
In all our Dom0s and DomUs we are getting around 20 seconds.
On the same physical machines, booting Debian without Xen, we get 5-7 seconds.
In some KVM VMs on another server we get almost the same as on the physical machine.
(All on local disks, XFS filesystems, with DomU images in raw format.)
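(For reference, the raw images are attached to the DomUs with a disk line roughly like this; the path is just a placeholder:)

```
# xl.cfg disk line for a raw image stored on a local XFS filesystem (illustrative path)
disk = [ '/srv/xen/domU-buster.img,raw,xvda,rw' ]
```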
I have booted Xen with the 4.8 and 4.4 releases with almost the same bad results.
Where could the problem be?
I think this difference between the DomUs and the physical machine is not normal.
Any pointers will be welcome.
Best regards,
Agustín