
[Xen-users] add_random must be set to 1 for me - Archlinux HVM x64 - XenServer 7 Latest Patched


  • To: xen-users@xxxxxxxxxxxxx
  • From: WebDawg <webdawg@xxxxxxxxx>
  • Date: Wed, 19 Oct 2016 13:40:06 -0500
  • Delivery-date: Wed, 19 Oct 2016 18:41:36 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

I know this is not the XenServer list, and I am sorry if this message
rubs anyone the wrong way or if I am completely off base here.  I have
never had to do any disk tuning in Xen/XenServer.  I have run both
plain Xen and XenServer.  This has come up in my XenServer instance,
and if someone could test it in plain Xen that would be great.

I have two forum posts open about this, neither of which has had a
reply yet:
*https://bbs.archlinux.org/viewtopic.php?id=218405
*https://discussions.citrix.com/topic/381981-archlinux-hvm-domu-slow-disk-access-100-cpu-xenserver-7/

When I dd to a disk in an Archlinux HVM instance that is fully up to
date and running the standard linux kernel, top inside the domU shows
100% CPU (dd itself is at 100%), and xentop on dom0 also shows 100%
CPU usage for that guest.

I also only get about 2-4 MB/s of IO.
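
For reference, the test was a plain dd write along these lines (the
path and size here are placeholders, not my exact command):

# Placeholder file and size; conv=fdatasync just makes sure the data
# actually hits the disk before dd reports a speed
dd if=/dev/zero of=/root/ddtest.img bs=1M count=1024 conv=fdatasync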

I can make this go away by doing this:

echo 1 > /sys/block/xvda/queue/add_random
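
If this ends up being the right workaround, a udev rule should make it
persistent across reboots; this is just a sketch and the file name is
arbitrary, I have not deployed it yet:

# /etc/udev/rules.d/60-xvd-add-random.rules (name is arbitrary)
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="xvd[a-z]", ATTR{queue/add_random}="1"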

My Debian domU instances have add_random = 1, which is why I tried it:
they work as expected, and since I could not find any useful
information on the internet I started comparing settings between the
two.

With that set there are no more CPU usage issues, and I get the same
speed as the Debian domUs.
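
For anyone comparing a working Debian domU against the Arch one,
dumping every queue tunable side by side is the quickest way to spot
differences (xvda is just the device name in my setup):

# Print each queue tunable and its value for the device
grep -r . /sys/block/xvda/queue/ 2>/dev/null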

It looks like Archlinux is now on the block multiqueue (blk-mq) layer
by default, and I do not know if I can go back; I have not looked
harder into reverting or testing that as a fix.  The only reasons I
think this are that some of my queue options are not
changeable/disabled and every one of my devices has an mq directory.
I am getting this information from
https://bugzilla.novell.com/show_bug.cgi?id=911337 so I could be
wrong.
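
In case it helps anyone reproduce the check, this is roughly what I am
looking at (the device name is from my setup):

# A blk-mq device has an mq directory; on these kernels the scheduler
# then usually reports as "none"
ls /sys/block/xvda/mq
cat /sys/block/xvda/queue/scheduler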

Reading 
https://wiki.archlinux.org/index.php/Improving_performance#Tuning_IO_schedulers

The Archlinux wiki still talks about enabling the block multiqueue
layer with scsi_mod.use_blk_mq=1, but I did not do that, so presumably
it is simply enabled by default now for these devices?
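
To rule the boot parameter in or out, something like this should show
whether the scsi_mod blk-mq switch is set anywhere; note the module
parameter path only exists on kernels that expose it, and the Xen
blkfront devices may be converted to blk-mq independently of scsi_mod:

# Was it passed on the kernel command line?
grep -o 'scsi_mod.use_blk_mq=[^ ]*' /proc/cmdline
# Current value of the module parameter, if present (Y/N)
cat /sys/module/scsi_mod/parameters/use_blk_mq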

If someone could shed some insight into why enabling the collection of
I/O timing entropy for /dev/random makes the 'system work', that would
be great.  Like I said, I am just getting into this and will be doing
more tuning if I can.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

