
Re: [PATCH v4 3/3] x86/time: avoid reading the platform timer in rendezvous functions


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 29 Apr 2021 14:48:01 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 29 Apr 2021 12:48:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Apr 21, 2021 at 12:06:34PM +0200, Jan Beulich wrote:
> On 20.04.2021 18:12, Roger Pau Monné wrote:
> > On Thu, Apr 01, 2021 at 11:55:10AM +0200, Jan Beulich wrote:
> >> Reading the platform timer isn't cheap, so we'd better avoid it when the
> >> resulting value is of no interest to anyone.
> >>
> >> The consumer of master_stime, obtained by
> >> time_calibration_{std,tsc}_rendezvous() and propagated through
> >> this_cpu(cpu_calibration), is local_time_calibration(). With
> >> CONSTANT_TSC the latter function uses an early exit path, which doesn't
> >> explicitly use the field. While this_cpu(cpu_calibration) (including the
> >> master_stime field) gets propagated to this_cpu(cpu_time).stamp on that
> >> path, both structures' fields get consumed only by the !CONSTANT_TSC
> >> logic of the function.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> v4: New.
> >> ---
> >> I realize there's some risk associated with potential new uses of the
> >> field down the road. What would people think about compiling time.c a
> >> 2nd time into a dummy object file, with a conditional enabled to force
> >> assuming CONSTANT_TSC, and with that conditional used to suppress
> >> presence of the field as well as all audited uses of it (i.e. in
> >> particular that large part of local_time_calibration())? Unexpected new
> >> users of the field would then cause build time errors.
> > 
> > Wouldn't that add quite a lot of churn to the file itself in the form
> > of pre-processor conditionals?
> 
> Possibly - I didn't try yet, simply because of fearing this might
> not be liked even without presenting it in patch form.
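
To make the churn concrete, I'd expect something like the sketch below
(the macro and struct names are made up, not the actual Xen code), with
a similar conditional around every audited use of the field; time.c
would then be compiled a second time with -DBUILD_ASSUME_CONSTANT_TSC
into a dummy object, so any new, unguarded use of master_stime breaks
that build:

#include <stdint.h>

typedef int64_t s_time_t;

struct cpu_calibration_sketch {
    uint64_t local_tsc;
    s_time_t local_stime;
#ifndef BUILD_ASSUME_CONSTANT_TSC
    s_time_t master_stime; /* suppressed when CONSTANT_TSC is forced */
#endif
};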
> 
> > Could we instead set master_stime to an invalid value that would make
> > the consumers explode somehow?
> 
> No idea whether there is any such "reliable" value.
> 
> > I know there might be new consumers, but those should be able to
> > figure out whether the value is sane by looking at the existing ones.
> 
> This could be the hope, yes. But the effort of auditing the code to
> confirm the potential of optimizing this (after vaguely getting the
> impression there might be room) was non-negligible (in fact I did
> three runs just to be really certain). This in particular means
> that I'm in no way certain that looking at existing consumers would
> point out the possible pitfall.
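
FWIW, what I had in mind was something along these lines (the poison
value and the names are purely illustrative, not actual Xen code):

#include <assert.h>
#include <stdint.h>

typedef int64_t s_time_t;

/* Illustrative only: an obviously bogus timestamp, chosen so that any
 * arithmetic an unaware consumer does with it yields results wrong
 * enough to show up quickly in testing. */
#define STIME_POISON ((s_time_t)0xDEAD0000DEAD0000ULL)

static s_time_t master_stime = STIME_POISON;

static s_time_t consume_master_stime(void)
{
    /* A new consumer would be expected to trip over this. */
    assert(master_stime != STIME_POISON);
    return master_stime;
}

Though I agree it's hard to guarantee that any particular value is
reliably invalid.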
> 
> > Also, since this is only done on the BSP on the last iteration I
> > wonder if it really makes such a difference performance-wise to
> > warrant all this trouble.
> 
> By "all this trouble", do you mean the outlined further steps or
> the patch itself?

Yes, either the further steps or the fact that we would have to be
careful not to introduce new users of master_stime that expect it to
be set when CONSTANT_TSC is true.

> In the latter case, while it's only the BSP to
> read the value, all other CPUs are waiting for the BSP to get its
> part done. So the extra time it takes to read the platform clock
> affects the overall duration of the rendezvous, and hence the time
> not "usefully" spent by _all_ of the CPUs.

Right, but that's only during the time rendezvous, which doesn't
happen that often. And I would guess that the rendezvous of all the
CPUs is itself the biggest hit in terms of performance.
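
I do see the mechanism, to be clear - roughly the shape below (a
simplified sketch, not the actual rendezvous code; the reader function
is a stand-in for the expensive HPET/ACPI-PM/PIT read):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef int64_t s_time_t;

extern s_time_t read_platform_stime_standin(void);

static atomic_bool master_ready;
static s_time_t master_stime;

static void rendezvous_sketch(bool is_bsp)
{
    if ( is_bsp )
    {
        /* All other CPUs spin below until this slow read completes, so
         * its cost is paid by every CPU in the rendezvous, not just the
         * BSP. */
        master_stime = read_platform_stime_standin();
        atomic_store_explicit(&master_ready, true, memory_order_release);
    }
    else
    {
        while ( !atomic_load_explicit(&master_ready, memory_order_acquire) )
            ; /* cpu_relax() in the real thing */
    }
}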

While I don't think I would have done the work myself, I guess there's
no reason to block it.

In any case I would prefer that such performance-related changes come
with some proof that they do indeed make a difference, or else we
might just be making the code more complicated for no concrete
performance benefit.
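
Even a crude measurement would do, e.g. cycles spent in the read on the
BSP, along the lines of this userspace-style sketch (clock_gettime()
here is only a stand-in for the platform timer read):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static inline uint64_t rdtsc_serialized(void)
{
    uint32_t lo, hi;

    __asm__ __volatile__ ( "lfence; rdtsc" : "=a" (lo), "=d" (hi) );
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    struct timespec ts;
    uint64_t t0, cost;

    t0 = rdtsc_serialized();
    clock_gettime(CLOCK_MONOTONIC, &ts); /* stand-in for the slow read */
    cost = rdtsc_serialized() - t0;

    printf("platform read: %llu cycles\n", (unsigned long long)cost);
    return 0;
}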

Thanks, Roger.
