
Re: [PATCH v2] xen/arm: implement GICD_I[S/C]ACTIVER reads


  • To: Julien Grall <julien@xxxxxxx>
  • From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
  • Date: Thu, 2 Apr 2020 10:19:57 -0700 (PDT)
  • Cc: Peng Fan <peng.fan@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, George.Dunlap@xxxxxxxxxx, Wei Xu <xuwei5@xxxxxxxxxxxxx>, Bertrand.Marquis@xxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 02 Apr 2020 17:20:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, 1 Apr 2020, Julien Grall wrote:
> On 01/04/2020 01:57, Stefano Stabellini wrote:
> > On Mon, 30 Mar 2020, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 30/03/2020 17:35, Stefano Stabellini wrote:
> > > > On Sat, 28 Mar 2020, Julien Grall wrote:
> > > > > Hi Stefano,
> > > > > 
> > > > > On 27/03/2020 02:34, Stefano Stabellini wrote:
> > > > > > This is a simple implementation of GICD_ICACTIVER / GICD_ISACTIVER
> > > > > > reads. It doesn't take into account the latest state of interrupts on
> > > > > > other vCPUs. Only the current vCPU is up-to-date. A full solution is
> > > > > > not possible because it would require synchronization among all vCPUs,
> > > > > > which would be very expensive in terms or latency.
> > > > > 
> > > > > Your sentence suggests you have numbers showing that correctly
> > > > > emulating the registers would be too slow. Mind sharing them?
> > > > 
> > > > No, I don't have any numbers. Would you prefer a different wording or a
> > > > better explanation? I also realized there is a typo in there (or/of).
> > > Let me start with this: I think correctness is more important than speed.
> > > So I would have expected your commit message to contain some fact why
> > > synchronization is going to be slow and why this is a problem.
> > > 
> > > To give you a concrete example, the implementation of set/way
> > > instructions is really slow (it could take a few seconds depending on
> > > the setup). However, this was fine because not implementing them
> > > correctly would have a greater impact on the guest (corruption) and
> > > they are not used often.
> > > 
> > > I don't think the performance in our case will be in the same order of
> > > magnitude. It is most likely to be in the range of milliseconds (if not
> > > less), which I think is acceptable for emulation (particularly for the
> > > vGIC) and the current uses.
> > 
> > Writing down on the mailing list some of our discussion from today.
> > 
> > Correctness is not just about compliance with a specification; it is
> > also about not breaking guests. Introducing latency in the range of
> > milliseconds, or even hundreds of microseconds, would break any
> > latency-sensitive workload. We don't have numbers, so we don't know
> > for certain what effect your suggestion would have.
> 
> You missed part of the discussion. I don't disagree that latency is
> important. However, if an implementation is only 95% reliable, then 5% of
> the time your guest may break (corruption, crash, deadlock...). At that
> point latency is the least of your concerns.

Yeah, I neglected to highlight it, partly because I look at it from a
slightly different perspective: I think IRQ latency is part of correctness.

If we have a solution that is not 100% faithful to the specification we
are going to break guests that rely on a faithful implementation of
ISACTIVER.

If we have a solution that is 100% faithful to the specification but
causes latency spikes it breaks RT guests.

But these are different sets of guests; one is not necessarily a subset
of the other: there are guests that cannot tolerate any latency spikes
but are OK with an imprecise implementation of ISACTIVER.

My preference is a solution that is both spec-faithful and also doesn't
cause any latency spikes. If that is not possible then we'll have to
compromise or come up with "creative" ideas.


> > It would be interesting to have those numbers, and I'll add running
> > the experiments you suggested to my TODO list, but I'll put it on the
> > back burner (from a Xilinx perspective it is low priority, as no
> > customers are affected).
> 
> How about we get a correct implementation merged first and then discuss
> optimization? This would allow the community to check whether there is
> actually noticeable latency in their workloads.

To me, a correct implementation means one that is correct from both the
specification point of view and the latency point of view. A patch that
potentially introduces latency spikes could cause guest breakage, and I
don't think it should be considered correct. The tests would have to be
done beforehand.



In terms of other "creative" ideas, here are some:

One idea, as George suggested, would be to document the interface
deviation. The intention would still be to remove the deviation
eventually, but at least we would be clear about what we have, ideally
in a single place together with other hypervisors. This is my
preference.

Another idea is that we could crash the guest if GICD_ISACTIVER is read
from a multi-vCPU guest. It is similar to what we already do today, but
better because we would do it on purpose (not because of a typo) and
because it would at least work for single-vCPU guests.

We could also leave it as is (crash all the time), but that implies that
vendors seeing issues with Linux today will have to keep maintaining
patches in their private trees until a better solution is found. This
would also be the case if we crash multi-vCPU guests as suggested above.



 

