Re: [Xen-devel] [PATCH RFC 00/49] xen: add core scheduling support
Out of curiosity, has there been any research on whether it makes more
sense to just disable CPU threading altogether, with respect to overall
performance? In some of the testing we did with OpenXT, we noticed a
performance increase in some tests when hyperthreading was disabled. I
would be curious what other research has been done in this regard.
Either way, if threading is enabled, grouping up threads makes a lot of
sense WRT some of the recent security issues that have come up with
Intel CPUs.

On Fri, Mar 29, 2019 at 11:03 AM Juergen Gross <jgross@xxxxxxxx> wrote:
>
> On 29/03/2019 17:56, Dario Faggioli wrote:
> > On Fri, 2019-03-29 at 16:46 +0100, Juergen Gross wrote:
> >> On 29/03/2019 16:39, Jan Beulich wrote:
> >>>>>> On 29.03.19 at 16:08, <jgross@xxxxxxxx> wrote:
> >>>> This is achieved by switching the scheduler to no longer see
> >>>> vcpus as the primary object to schedule, but "schedule items".
> >>>> Each schedule item consists of as many vcpus as each core has
> >>>> threads on the current system. The vcpu->item relation is fixed.
> >>>
> >>> the case if you arranged vCPU-s into virtual threads, cores,
> >>> sockets, and nodes, but at least from the patch titles it doesn't
> >>> look as if you did in this series. Are there other reasons to make
> >>> this a fixed relationship?
> >>
> >> In fact I'm doing it, but only implicitly and without adapting the
> >> cpuid related information. The idea is to pass the topology
> >> information at least below the scheduling granularity to the guest
> >> later.
> >>
> >> Not having the fixed relationship would result in something like
> >> the co-scheduling series Dario already sent, which would need more
> >> than mechanical changes in each scheduler.
> >>
> > Yep. So, just for the record, those series are this one for Credit1:
> > https://lists.xenproject.org/archives/html/xen-devel/2018-08/msg02164.html
> >
> > And this one for Credit2:
> > https://lists.xenproject.org/archives/html/xen-devel/2018-10/msg01113.html
> >
> > Both are RFC, but the Credit2 one was much, much better (more
> > complete, more tested, more stable, achieving better fairness, etc).
> >
> > In these series, the "relationship" being discussed here is not
> > fixed. Not right now, at least, but it can become so (I didn't do it
> > as we currently lack the info for doing that properly).
> >
> > It is/was, IMO, a good thing that everything works both with and
> > without topology enlightenment (even for when we'll have it, in
> > case one, for whatever reason, doesn't care).
> >
> > As said by Juergen, the two approaches (and hence the structure of
> > the series) are quite different. This series is more generic, and
> > acts on the common scheduler code and logic. It's quite intrusive,
> > as we can see :-D, but enables the feature for all the schedulers
> > at once (well, they all need changes, but mostly mechanical ones).
> >
> > My series, OTOH, act on each scheduler specifically (and in fact
> > there is one for Credit and one for Credit2, and there would need
> > to be one for RTDS, if wanted, etc). They're much more
> > self-contained, but less generic; and the changes necessary within
> > each scheduler are specific to the scheduler itself, and
> > non-mechanical.
>
> Another line of thought: in case we want core scheduling for security
> reasons (to ensure that only vcpus of the same guest are ever sharing
> a core), the same might apply to the guest itself: it might want to
> ensure that only threads of the same process are sharing a core. This
> would be quite easy with my series, but impossible with Dario's
> solution without the fixed relationship between guest siblings.
>
>
> Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
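To illustrate the "schedule item" grouping described in the cover
letter excerpt above, here is a minimal C sketch. All names are
hypothetical (struct sched_item, make_items() and MAX_THREADS_PER_CORE
are illustrative only; the actual series defines its own structures):

    /* Group vcpus into fixed "schedule items", one per guest core. */
    #define MAX_THREADS_PER_CORE 8

    struct vcpu;                   /* guest virtual CPU (opaque here) */

    struct sched_item {
        /* Sibling vcpus; membership is fixed for the item's lifetime. */
        struct vcpu *vcpus[MAX_THREADS_PER_CORE];
        unsigned int nr_vcpus;     /* == threads per physical core */
    };

    /*
     * Build the items of a domain: vcpu i always lands in item
     * i / nr_threads, i.e. the vcpu->item relation is fixed at
     * domain creation time.
     */
    static void make_items(struct sched_item *items, struct vcpu **vcpus,
                           unsigned int nr_vcpus, unsigned int nr_threads)
    {
        for (unsigned int i = 0; i < nr_vcpus; i++) {
            struct sched_item *item = &items[i / nr_threads];

            item->vcpus[item->nr_vcpus++] = vcpus[i];
        }
    }

With this, the scheduler dispatches whole items rather than individual
vcpus, so the hardware threads of one core only ever run vcpus of the
same item, and therefore of the same guest.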
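On Juergen's closing point, that a guest might want to ensure only
threads of the same process share a core: Linux later gained exactly
this facility ("core scheduling", merged in 5.14, well after this
thread). A minimal sketch, assuming a Linux 5.14+ guest kernel built
with CONFIG_SCHED_CORE:

    /* Restrict SMT sibling sharing to this process's threads via
     * Linux core scheduling (prctl(PR_SCHED_CORE), Linux >= 5.14).
     */
    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SCHED_CORE          /* for pre-5.14 userspace headers */
    #define PR_SCHED_CORE        62
    #define PR_SCHED_CORE_CREATE 1
    #endif

    int main(void)
    {
        /*
         * Create a core-scheduling cookie for the whole thread group:
         * from now on only tasks holding the same cookie may run on
         * the SMT sibling of a core this process occupies.
         */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE,
                  0 /* pid: self */, 1 /* PIDTYPE_TGID */, 0) != 0) {
            perror("prctl(PR_SCHED_CORE)");
            return 1;
        }

        /* ... spawn worker threads; they inherit the cookie ... */
        return 0;
    }

Since tasks with different cookies are never co-scheduled on sibling
threads, the rest of the (possibly untrusted) system can no longer
share a core with this process, mirroring at process granularity what
the hypervisor-side series does at guest granularity.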