
Re: [Xen-devel] Ping: [PATCH 0/4] HVM: produce better binary code

At 17:16 +0100 on 04 Sep (1378315004), Andrew Cooper wrote:
> On 04/09/13 11:06, Jan Beulich wrote:
> >>>> On 23.08.13 at 15:58, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> > While I got two reviews meanwhile for this series (thanks Andrew
> > and Tim!), ack-s from the maintainers are still missing:
> >
> >> 1: VMX: streamline entry.S code
> >> 2: VMX: move various uses of UD2 out of fast paths
> >> 3: VMX: use proper instruction mnemonics if assembler supports them
> > Jun, Eddie? (Yes, there had been a couple of revisions to patches
> > 2 and 3, but even their sending was now more than a week ago.)
> >
> >> 4: SVM: streamline entry.S code
> > Suravee, Boris, Jacob?
> >
> > I'm going to wait for perhaps another day or two, and will assume
> > silent agreement if I don't hear otherwise. I'll similarly assume silent
> > agreement to the discussed follow-up patches (dropping memory
> > barriers in a few places as well as converting __vmread() along the
> > lines of __vmread_safe()) once done, submitted, and reviewed.
> >
> > Jan
> >
> One thing I have noticed from subsequent reading of processor manuals is
> that reads from control registers are not actually serialising.  It
> would be useful to get some comments about the in-processor optimisation
> for runs of pushes/pops, to determine whether it is a good idea to
> interrupt such a run.

The Intel optimization manual goes into some detail about it.  The short
version is that it introduces a dependence which can otherwise be
finagled away by cunning logic in the instruction decoder.  It's listed
as 'medium impact, medium generality' which is about as much detail as
we can expect without measuring the actual code on the processors we
care about.

I'm not certain, on a modern CPU, whether it's better to issue a series
of PUSHes or a series of %rsp-relative MOVs and a SUB.
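For illustration, the two alternatives look roughly like this (a hypothetical sketch in GNU assembler syntax; the register choice and ordering here are made up, not taken from the actual entry.S):

```
        /* Option 1: a run of PUSHes.  Each PUSH implicitly updates
         * %rsp, so the decoder's stack engine tracks a running offset;
         * a later instruction that reads %rsp explicitly forces it to
         * synchronise the real register value. */
        push    %rdi
        push    %rsi
        push    %rdx

        /* Option 2: one explicit SUB plus %rsp-relative MOVs.  The
         * stores are independent of one another and depend only on the
         * single SUB, at the cost of larger instruction encodings. */
        sub     $24, %rsp
        mov     %rdi, 16(%rsp)
        mov     %rsi, 8(%rsp)
        mov     %rdx, 0(%rsp)
```

Which of the two wins presumably depends on the stack-engine behaviour of the particular microarchitecture, which is why measurement on the processors we care about would be needed.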

