Re: [Xen-users] HTPC + DUAL PC In one
On 07/16/2014 01:07 PM, Gordan Bobic wrote:
> On 2014-07-16 16:08, Austin S Hemmelgarn wrote:
>> Hyper-Threading shares a lot more than just the FPU between threads,
>> IIRC the only thing in HT that isn't potentially shared is the
>> registers, so unless the OS is built to schedule intelligently,
>> performance on an HT CPU can be really terrible.
>
> The throughput should always go up in the general case. Under
> a saturating MySQL load it goes up about 10-15%, for example.
> The reason HT helps is 2-fold:
>
> 1) It reduces the number of context switches because two
> contexts are handled by the hardware at the same time
>
> 2) It improves the benefit from CPU caches. Data in RAM is very
> far away, and you don't want the CPU to sit idle while the MCH
> is fetching it. While that is happening, if you have another
> thread pre-loaded and ready to execute, you can check whether the
> data that thread requires is already in the cache, and if it is,
> run it.

I'm just trying to point out that HT is a less efficient solution than, for example, the SMT implementation on an UltraSPARC T1.

Also, I don't entirely agree with either of your points:

1. Would be great, except that with 2-way HT you still have at least 4 different context-related states things can be in (even ignoring interrupt and SMM contexts, and the separate context of each individual process), and any time the core transitions between those states, both threads get stopped.

2. Would also be great, except that the L1 cache is shared between the multiple threads on a given core (by contrast, on AMD's Bulldozer processors each integer core gets its own L1 data cache, and only the L2 and L3 are shared).

In general, most up-to-date OSes do a decent job of handling HT, but most of them (especially Windows and Solaris) still leave a lot to be desired.
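For anyone who wants to see how this plays out on their own machine: on a Linux host (or Xen dom0) the kernel exposes the HT sibling map through sysfs, so you can check which logical CPUs share a physical core before deciding on vCPU pinning. A minimal sketch in plain Python, using only the standard sysfs paths (nothing Xen-specific is assumed):

#!/usr/bin/env python3
# List Hyper-Threading sibling groups by reading the kernel's CPU
# topology from sysfs. Each group is a set of logical CPUs that share
# one physical core (and therefore its execution resources and caches).
import glob

def sibling_groups():
    groups = set()
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(pattern):
        with open(path) as f:
            # Format depends on the kernel, e.g. "0,4" or "0-1".
            groups.add(f.read().strip())
    return sorted(groups)

if __name__ == "__main__":
    for group in sibling_groups():
        print("HT siblings:", group)

With the sibling map in hand you can then pin vCPUs (cpus= in the domain config, or "xl vcpu-pin" at runtime) so that two busy guests never end up sharing a physical core.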