
Re: [Xen-users] Give dom0 2 pinned vcpus, but share one with domU



On Tue, 2014-05-20 at 05:22 -0700, jumperalex wrote:
> > How about NOT pinning / Why am I pinning?
> 
> In short because of this
> http://wiki.xen.org/wiki/Xen_Project_Best_Practices#Dedicating_a_CPU_core.28s.29_only_for_dom0
> I'm just doing what I'm told :O  But I'm obviously open to suggestion.

This is mentioned in http://wiki.xen.org/wiki/Tuning#Dom0_VCPUs and
http://wiki.xen.org/wiki/Tuning#Vcpu_Pinning too. It does say "might"
and "can", and perhaps even those are a bit strong. Pinning is one tool
in the performance tuning arsenal, but whether it helps or hurts is
very workload dependent (and it can be a lot in either direction).
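
For concreteness, the arrangement that page describes boils down to
something like this (the vcpu counts and the shared pcpu here are
illustrative, taken from this thread's subject rather than being a
recommendation):

  # Xen hypervisor boot options (appended to the xen.gz line in grub):
  # give dom0 two vcpus and pin dom0 vcpu N to pcpu N (so pcpus 0-1)
  dom0_max_vcpus=2 dom0_vcpus_pin

  # In the domU's xl config, allow the guest onto pcpus 1-7, sharing
  # pcpu1 with dom0:
  cpus = "1-7"

Whether that beats simply letting the scheduler float everything is
exactly the workload-dependent part.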

I've made a note of this on
http://wiki.xen.org/wiki/Xen_Document_Days/TODO . Hopefully someone who
knows this tuning stuff better than I will improve things at some point.

Ian.

> 
> Now I can't claim my dom0 is doing HEAVY I/O, but it is hosting my unraid
> array, so any VM (just one at this point, running Plex Media Server) will
> be pulling 1080p video streams from it to transcode (fulfilling the heavy
> domU workload bit) and then sending them back out to the clients on the
> network.  Soon I also plan on running some handbrake jobs, which will
> probably have my server screaming for several days straight, and then
> again about twice a week.  Those could be scheduled during times of day
> when user interaction is unlikely, but that would just prolong the
> overall job of converting my whole library.  At the same time it is
> possible, though rare due to scheduling, that I could be hitting the
> array with two backup streams coming from PCs running Acronis.
> 
> That is not quite the worst-case scenario, but it is the most likely one.
> I could throw in a few other processes that are also pretty I/O heavy,
> but those are really unlikely to overlap, or they'll happen when no one
> is around to see it.  And the two main culprits, my CPU-heavy rsync and
> Plex transcoding, literally couldn't have happened at the same time,
> because the VM gets paused to run the rsync copy of the VM image :)  As
> you'll see below, though, I've also solved the CPU-hogging rsync problem.
> 
> All that said, I'm fully willing to admit I'm probably spending 95% of my
> time chasing the last 5% of performance, but I like at least poking around
> to make sure I haven't left something huge ripe for the taking.
> 
> > There is an option to adjust the credit scheduler  - see
> > http://wiki.xen.org/wiki/Credit_Scheduler. More on Xen tuning can be found
> > http://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance. See also
> > http://wiki.xen.org/wiki/Performance_of_Xen_VCPU_Scheduling.
> 
> Thanks. I will definitely take a look.  If done right, that seems like
> an even more elegant solution.
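> 
> Skimming those pages, the knob I'd probably try first is the per-domain
> weight; if I'm reading it right, something like this should favour dom0
> under contention without pinning anything (the 512 is just my guess at
> a starting point, the default weight being 256):
> 
>   # show current credit scheduler parameters for all domains
>   xl sched-credit
> 
>   # give dom0 twice the CPU time of a default-weight domain when
>   # the pcpus are contended
>   xl sched-credit -d Domain-0 -w 512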
> 
> 
> > Note though that depending on your workload pinning (especially dom0) 
> > might be actively harmful. Is there some reason you want to pin rather 
> > than letting dom0's vcpus float?
> 
> Well, I know my dom0 workload is generally pretty light from a CPU
> perspective.  Even a single core of an FX-8320 would generally be
> considered overkill for just handling the day-to-day of an unraid array.
> What even brought this up was that an rsync backing up my domU.img into
> the dom0 array was crushing my dom0 CPU, which in turn choked off the
> rsync itself.  BUT ... I found the main issue, which was the use of -z
> for compression in an rsync between local folders.  Once I turned that
> off, CPU usage dropped and speed took off.  So I've solved my current
> problem via efficiency rather than brute force (my preferred way), but
> it still has me thinking it might not be a bad idea to let dom0 have
> the option of a little bit more.
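> 
> In case anyone trips over the same thing, the difference was just this
> (exact paths made up here for illustration):
> 
>   # what I had: -z compresses data in transit, which is pure CPU
>   # overhead when both ends are local disks
>   rsync -avz /mnt/cache/domU.img /mnt/array/backup/
> 
>   # dropping -z: same copy, a fraction of the dom0 CPU load
>   rsync -av /mnt/cache/domU.img /mnt/array/backup/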
> 
> I did try it out last night while watching xl vcpu-list, xl top, and
> htop in both doms.  I ran rsync with -z and noticed an improvement,
> which didn't surprise me.  Then I ran a transcode.  It is hard to
> confirm performance improvements there when you're just going from 6
> CPUs to 7, so I was mostly just looking to see that seven distinct
> PCPUs were being used.  At first I wasn't sure I was really seeing
> pcpu1 being shared like you said, but after I looked at the screenshots
> later in the evening I convinced myself it was working as hoped.  Then
> I woke up to your post.  So I'll probably change it back again and
> observe some more while I also read up on credit scheduling.
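> 
> For the record, this is roughly what I was watching and fiddling with
> (the domU name is just an example; use whatever xl list shows):
> 
>   # show which pcpu each vcpu is currently on, plus its affinity
>   xl vcpu-list
> 
>   # re-pin at runtime without a reboot, e.g. to test sharing pcpu1:
>   xl vcpu-pin Domain-0 1 1
>   xl vcpu-pin plex-vm all 1-7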
> 
> Thank you for indulging me.  Cheers.



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

