[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-users] HTPC + DUAL PC In one

  • To: xen-users@xxxxxxxxxxxxx
  • From: Gordan Bobic <gordan@xxxxxxxxxx>
  • Date: Wed, 16 Jul 2014 18:07:47 +0100
  • Delivery-date: Wed, 16 Jul 2014 17:08:04 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

On 2014-07-16 16:08, Austin S Hemmelgarn wrote:
> Hyper-Threading shares a lot more than just the FPU between threads;
> IIRC the only thing in HT that isn't potentially shared is the
> registers, so unless the OS is built to schedule intelligently,
> performance on an HT CPU can be really terrible.

Throughput should still go up in the general case; under
a saturating MySQL load it goes up by about 10-15%, for example.
The reasons HT helps are two-fold:

1) It reduces the number of context switches, because two
contexts are handled by the hardware at the same time.

2) It improves the benefit from CPU caches. Data in RAM is very
far away, and you don't want the CPU to sit idle while the MCH
is fetching it. If another thread is pre-loaded and ready to
execute while that fetch is in flight, the CPU can check whether
the data that thread requires is already in the cache, and if it
is, run it.
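As a practical aside, on Linux you can see which logical CPUs are HT siblings of the same physical core, which matters when pinning vCPUs so two busy vCPUs don't land on one core. A quick sketch using standard sysfs and util-linux paths (cpu0 is just an example):

```shell
# List the sibling threads that share cpu0's physical core
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

# Per-CPU overview mapping logical CPUs to cores and sockets
lscpu --extended=CPU,CORE,SOCKET
```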

> Also, just an aside, the FPU on a given module on Bulldozer-based CPUs
> (like the FX-8350) is only shared for 256-bit vector operations. If you
> are using something like Gentoo, you can build your packages with the
> GCC option '-mprefer-avx128' to force the use of 128-bit vector ops,
> and the system will behave as if each core has its own FPU.

> Also, as a second aside, Xen makes very little use of the FPU, and
> most non-media, non-number-crunching stuff on Linux doesn't use it much.

Indeed, FPU usage is very application-specific, and most applications
don't use it. The SPARC T1 does something similar, sharing the FPU
between cores, because most server loads do little or no FP work.
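For reference, the '-mprefer-avx128' suggestion above would go into the global compiler flags. A minimal make.conf sketch for a Gentoo-style setup (the -march value here is an assumption for an FX-8350; adjust to your CPU):

```shell
# /etc/portage/make.conf (fragment)
# -mprefer-avx128 makes GCC emit 128-bit AVX ops instead of 256-bit ones,
# so the two cores of a Bulldozer module don't contend for the shared FPU.
CFLAGS="-O2 -march=bdver2 -mprefer-avx128 -pipe"
CXXFLAGS="${CFLAGS}"
```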


Xen-users mailing list


