
Re: [PATCH v2] xen/arm: implement GICD_I[S/C]ACTIVER reads


  • To: George Dunlap <George.Dunlap@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Tue, 7 Apr 2020 16:25:23 +0000
  • Accept-language: en-GB, en-US
  • Cc: Peng Fan <peng.fan@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, "maz@xxxxxxxxxx" <maz@xxxxxxxxxx>, Wei Xu <xuwei5@xxxxxxxxxxxxx>, nd <nd@xxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>, Julien Grall <julien.grall.oss@xxxxxxxxx>
  • Delivery-date: Tue, 07 Apr 2020 16:25:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2] xen/arm: implement GICD_I[S/C]ACTIVER reads



On 7 Apr 2020, at 17:16, George Dunlap <George.Dunlap@xxxxxxxxxx> wrote:



On Apr 6, 2020, at 7:47 PM, Julien Grall <julien@xxxxxxx> wrote:

On 06/04/2020 18:58, George Dunlap wrote:
On Apr 3, 2020, at 9:27 PM, Julien Grall <julien.grall.oss@xxxxxxxxx> wrote:

On Fri, 3 Apr 2020 at 20:41, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:

On Thu, 2 Apr 2020, Julien Grall wrote:
As we discussed on Tuesday, the cost for other vCPUs is only going to be a
trap to the hypervisor and then back again. The cost is likely smaller than
receiving and forwarding an interrupt.

You actually agreed on this analysis. So can you enlighten me as to why
receiving an interrupt is not a problem for latency but this is?

My answer was that the difference is that an operating system can
disable interrupts, but it cannot disable receiving this special IPI.

An OS can *only* disable its own interrupts, yet interrupts will still
be received by Xen even if the interrupts are masked at the processor
(e.g. local_irq_disable()).

You would need to disable interrupts one by one at the GIC level (using
ICENABLER) in order not to receive any interrupts. Yet Xen may still
receive interrupts for operational purposes (e.g. serial, console,
maintenance IRQ...). So traps will still happen.
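
(As an aside, to make "disable at the GIC level" concrete: masking one interrupt at the distributor is a single write to the GICD_ICENABLERn register that covers it. A minimal sketch, assuming a GICv2 distributor already mapped at gicd_base; the helper name and the writel() stub are made up for illustration, this is not Xen code.)

    #include <stdint.h>

    #define GICD_ICENABLER  0x180   /* offset of GICD_ICENABLER0, GICv2 */

    static inline void writel(uint32_t val, volatile void *addr)
    {
        *(volatile uint32_t *)addr = val;
    }

    /* Mask interrupt `irq` at the distributor: one bit per interrupt,
     * writing 1 disables forwarding of that interrupt, writing 0 is
     * ignored. */
    static void gicd_mask_irq(volatile uint8_t *gicd_base, unsigned int irq)
    {
        writel(1u << (irq % 32),
               gicd_base + GICD_ICENABLER + (irq / 32) * 4);
    }
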
I think Stefano’s assertion is that the users he has in mind will be configuring the system such that RT workloads get as few hypervisor-related interrupts as possible.  On a 4-core system, you could have non-RT workloads running on cores 0-1, and RT workloads running with the NULL scheduler on cores 2-3.  In such a system, you’d obviously arrange that serial and maintenance IRQs are delivered to cores 0-1.
Well, maintenance IRQs are per-pCPU, so you can't route them to another one...

But I think you missed my point that local_irq_disable() from the guest will not prevent the hypervisor from receiving interrupts, *even* the ones routed to the vCPU itself. They will just not be delivered to the guest context until local_irq_enable() is called.

My understanding from Stefano was that what his customers are concerned about is the time between when a physical IRQ is delivered to the guest and when the guest OS can respond appropriately.  The key thing here isn’t necessarily speed, but predictability: system designers need to know that, with high probability, their interrupt routines will complete within X cycles.

Further interrupts delivered to a guest are not a problem in this scenario, if the guest can disable them until the critical IRQ has been handled.

Xen-related IPIs, however, could potentially cause a problem if not mitigated.  Consider a guest where vcpu 1 loops over the register, while vcpu 2 is handling a latency-critical IRQ.  A naive implementation might send an IPI every time vcpu 1 does a read, spamming vcpu 2 with dozens of IPIs.  Then an IRQ routine which normally finishes well within the required time suddenly overruns and causes an issue.
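
(For illustration only, a stub of what that "naive" read path could look like; all names below are placeholders, not Xen internals. The point is just that every poll of ISACTIVER turns into IPIs to the sibling vCPUs.)

    #include <stdint.h>
    #include <stdio.h>

    struct vcpu_stub { unsigned int pcpu; int is_current; };

    /* Stand-in: a real hypervisor would raise an SGI whose handler does
     * nothing; the only purpose is to force an exit so the target CPU
     * syncs its list registers back into the vGIC state. */
    static void send_noop_ipi(unsigned int pcpu)
    {
        printf("kick pCPU %u\n", pcpu);
    }

    /* Stand-in for the vGIC's cached ISACTIVER<reg> value, assumed to be
     * up to date once every vCPU has exited at least once. */
    static uint32_t read_cached_active_bits(unsigned int reg)
    {
        (void)reg;
        return 0;
    }

    static uint32_t naive_isactiver_read(const struct vcpu_stub *vcpus,
                                         unsigned int nr, unsigned int reg)
    {
        /* Repeated on *every* trapped read -- the jitter source above. */
        for (unsigned int i = 0; i < nr; i++)
            if (!vcpus[i].is_current)
                send_noop_ipi(vcpus[i].pcpu);

        return read_cached_active_bits(reg);
    }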

I don’t know what maintenance IRQs are, but if they only happen intermittently, it’s possible that you’d never get more than a single one in a latency-critical IRQ routine; and as such, the variability in execution time (jitter) wouldn’t be an issue in practice.  But every time you add a new unblockable IPI, you increase this jitter; particularly if this unblockable IPI might be repeated an arbitrary number of times.

(Stefano, let me know if I’ve misunderstood something.)

So stepping back a moment, here’s all the possible ideas that I think have been discussed (or are there implicitly) so far.

1. [Default] Do nothing; guests using this register continue crashing

2. Make the I?ACTIVER registers RAZ/WI (read-as-zero, writes ignored).

3. Make I?ACTIVER return the most recent known value; i.e. KVM’s current behavior (as far as we understand it)

4. Use a simple IPI with do_noop to update I?ACTIVER

4a.  Use an IPI, but come up with clever tricks to avoid interrupting guests handling IRQs.

5. Trap to Xen on guest EOI, so that we know when the interrupt is no longer active.

This is possible to do on a per-interrupt basis, or once all interrupts in the LR registers have been handled (a maintenance interrupt when there is nothing left to handle in the LR registers; it is used to inject further interrupts when there are more pending interrupts than LR registers).

Maybe a solution making sure we get a maintenance interrupt once all interrupts in the LR registers have been handled could be a good mitigation?

This would allow us not to sync the other cores, but would make sure the time until active interrupts are cleaned up is bounded, so the poller could be sure to get the information at some acceptable point in time. (A rough sketch of the GICv2 bits involved follows the list below.)


6. Some clever paravirtualized option
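
(To make option 5 and the maintenance-interrupt variant above a bit more concrete: the GICv2 hypervisor interface already exposes the two relevant knobs, GICH_LR.EOI, which raises a maintenance interrupt when the guest EOIs that particular virtual interrupt, and GICH_HCR.NPIE, which raises one once no pending interrupts are left in the list registers. A minimal sketch with the bit layouts taken from the GICv2 spec; the helper functions are made up and this is not Xen's implementation.)

    #include <stdint.h>

    #define GICH_LR_HW          (1u << 31)
    #define GICH_LR_STATE_PEND  (1u << 28)
    #define GICH_LR_EOI         (1u << 19)  /* EOI maintenance, HW=0 entries only */

    #define GICH_HCR_EN         (1u << 0)
    #define GICH_HCR_NPIE       (1u << 3)   /* "no pending" maintenance interrupt */

    /* Option 5: build an LR entry that traps to Xen when the guest EOIs
     * virtual IRQ `virq`.  The EOI bit is only defined for HW=0 entries,
     * so a hardware interrupt would have to be injected with HW=0 and be
     * deactivated by the hypervisor from the maintenance handler. */
    static uint32_t lr_with_eoi_trap(uint32_t virq, uint32_t prio)
    {
        return ((prio & 0x1f) << 23) | GICH_LR_STATE_PEND |
               GICH_LR_EOI | (virq & 0x3ff);
    }

    /* Bertrand's variant: only take a maintenance interrupt once all list
     * register entries have been handled. */
    static uint32_t hcr_with_npie(uint32_t gich_hcr)
    {
        return gich_hcr | GICH_HCR_EN | GICH_HCR_NPIE;
    }

(The first gives per-interrupt granularity at the cost of one trap per EOI; the second bounds how stale the active state can get, with at most one extra trap per batch of list-register entries.)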

Obviously nobody wants #1, and #3 is clearly not really an option now either.

#2 is not great, but it’s simple and quick to implement for now.  Julien, I’m not sure of your position on this one: you rejected the idea back in v1 of this patch series, but seemed to refer to it again earlier in this thread.

#4 is relatively quick to implement a “dumb” version, but such a “dumb” version has a high risk of causing unacceptable jitter (or at least, so Stefano believes).

#4a or #6 are further potential lines to explore, but would require a lot of additional design to get working right.

I think if I understand Stefano’s PoV, then #5 would actually be acceptable — the overall amount of time spent in the hypervisor would probably be greater, but it would be bounded and predictable: once someone got their IRQ handler working reliably, it would likely continue to work.

It sounds like #5 might be pretty quick to implement; and then at some point in the future if someone wants to improve performance, they can work on 4a or 6.

I agree this could be a good mitigation.

Bertrand


 

