
Re: [Xen-devel] [PATCH RFC 0/8] x86/hvm, libxl: HVM SMT topology support



On Fri, Feb 26, 2016 at 04:03:46PM +0100, Dario Faggioli wrote:
> On Thu, 2016-02-25 at 17:21 +0000, Andrew Cooper wrote:
> > On 22/02/16 21:02, Joao Martins wrote:
> > > 
> > > Any comments are appreciated!
> > Hey.  Sorry I am late getting to this - I am currently swamped.  Some
> > general observations.
> > 
> Hi,
> 
> I'm also looking forward to find the time to look at this series, but
> that will have to wait a few days more, I'm afraid.
> 
> However, one thing (coming from Andrew's comment).
> 
> > As part of my further cpuid work, I will need to fix this.  I was
> > planning to fix it by requiring full CPU topology information to be
> > passed as part of the domaincreate or max_vcpus hypercall (not chosen
> > which yet).


You may not want to expose a full CPU topology to the guest.

Elena (CCed) found some oddities: the guest actually performed _worse_
when the full topology was exposed and its vCPUs were floating (not
pinned) on machines with SMT enabled.
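
To be clear, "exposed" here means the guest seeing an SMT level when it
enumerates its topology, which on x86 it typically derives from CPUID
leaf 0xb. A minimal sketch of what a guest computes from that leaf
(illustrative only, not code from this series):

#include <stdint.h>
#include <stdio.h>

/* Sub-leaf 0 of CPUID leaf 0xb describes the SMT level: EAX[4:0] is
 * the number of x2APIC ID bits taken by the thread ID, and EBX[15:0]
 * is the number of logical processors sharing a core. */
static void cpuid_count(uint32_t leaf, uint32_t subleaf, uint32_t *eax,
                        uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
{
    asm volatile ("cpuid"
                  : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                  : "a" (leaf), "c" (subleaf));
}

int main(void)
{
    uint32_t eax, ebx, ecx, edx;

    cpuid_count(0xb, 0, &eax, &ebx, &ecx, &edx);
    printf("threads per core: %u (thread ID width: %u bits)\n",
           ebx & 0xffff, eax & 0x1f);
    return 0;
}

The guest scheduler builds its sibling maps from exactly this kind of
enumeration, so once the vCPUs float across physical cores the map no
longer matches where the vCPUs actually run, and its placement
decisions end up based on stale information.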

Elena, could you confirm, please? I can't recall the details...
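
If topology does end up being passed in at domain creation time, as
Andrew suggests, presumably it would be something along the lines of a
per-vCPU descriptor. A purely hypothetical sketch (none of these names
exist in the Xen public interface):

#include <stdint.h>

/* Hypothetical, for illustration only. */
struct xen_vcpu_topology {
    uint32_t vcpu_id;   /* which vCPU this entry describes */
    uint16_t socket;    /* virtual package/socket ID */
    uint16_t core;      /* virtual core within that socket */
    uint8_t  thread;    /* SMT sibling within that core */
};

An array of these (one per vCPU, with a sensible flat default when the
caller does not care) would give Xen and the toolstack a single source
of truth for the guest-visible topology.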


> >
> If that means that, when creating a multi-vCPU guest, it will be
> necessary to provide Xen with all the information about the
> relationship between these multiple vCPUs (and that there will be some
> sensible default, of course), this would be *awesome*. :-)
> 
> At that point, one can just build on top of that, in order to achieve
> something like what is implemented in this series, or any other variant
> of it, which would indeed be *awesome* (did I say that already? :-D).

Maybe you should lay off the coffee for a bit...

/me backs slowly away.
