
Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 3 May 2021 16:47:55 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 03 May 2021 14:48:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, May 03, 2021 at 02:26:51PM +0200, Jan Beulich wrote:
> On 03.05.2021 11:28, Roger Pau Monné wrote:
> > On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
> >> On 20.04.2021 16:07, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/hvm/rtc.c
> >>> +++ b/xen/arch/x86/hvm/rtc.c
> >>> @@ -46,15 +46,6 @@
> >>>  #define epoch_year     1900
> >>>  #define get_year(x)    (x + epoch_year)
> >>>  
> >>> -enum rtc_mode {
> >>> -   rtc_mode_no_ack,
> >>> -   rtc_mode_strict
> >>> -};
> >>> -
> >>> -/* This must be in sync with how hvmloader sets the ACPI WAET flags. */
> >>> -#define mode_is(d, m) ((void)(d), rtc_mode_##m == rtc_mode_no_ack)
> >>> -#define rtc_mode_is(s, m) mode_is(vrtc_domain(s), m)
> >>
> >> Leaving aside my concerns about this removal, I think some form of
> >> reference to hvmloader and its respective behavior should remain
> >> here, presumably in form of a (replacement) comment.
> > 
> > What about adding a comment in rtc_pf_callback:
> > 
> > /*
> >  * The current RTC implementation will inject an interrupt regardless
> >  * of whether REG_C has been read since the last interrupt was
> >  * injected. This is why the ACPI WAET 'RTC good' flag must be
> >  * unconditionally set by hvmloader.
> >  */
> 
> For one I'm unconvinced this is "must"; I think it is "may". We're
> producing excess interrupts for an unaware guest, aiui. Presumably most
> guests can tolerate this, but - second - it may be unnecessary overhead.
> Which in turn may be why nobody has complained so far, as this sort of
> overhead may be hard to notice. I also suspect the RTC may not be used
> very often for generating a periodic interrupt.

I agree that there might be some overhead here, but asking the guest
to read REG_C in order to receive further interrupts also seems like
quite a lot of overhead because of all the interception involved.
IMO it's best to unconditionally offer the no_ack mode (as Xen has
been doing).

Also, strict_mode wasn't really behaving according to the spec
either, as it would inject up to 10 interrupts without the guest
having read REG_C.
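
From memory, the pre-patch callback only gives up after ~10
unacknowledged ticks, roughly along these lines (a sketch, not an
exact quote of the tree):

static void rtc_pf_callback(struct vcpu *v, void *opaque)
{
    RTCState *s = opaque;

    spin_lock(&s->lock);

    /* Only strict mode ever stops the timer, and only after ~10
     * ticks with RTC_IRQF still set (i.e. REG_C not read). */
    if ( !rtc_mode_is(s, no_ack)
         && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF)
         && ++(s->pt_dead_ticks) >= 10 )
    {
        /* VM is ignoring its RTC; no point in running the timer. */
        destroy_periodic_time(&s->pt);
        s->period = 0;
    }

    s->hw.cmos_data[RTC_REG_C] |= RTC_PF|RTC_IRQF;

    spin_unlock(&s->lock);
}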

> (I've also not seen the
> flag named "RTC good" - the ACPI constant is ACPI_WAET_RTC_NO_ACK, for
> example.)

I'm reading the WAET spec as published by Microsoft:

http://msdn.microsoft.com/en-us/windows/hardware/gg487524.aspx

There the flag is listed as 'RTC good'. Maybe that's outdated now?
It seems to be the official source for the specification, as linked
from https://uefi.org/acpi.
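
For reference, the two bits the spec defines in the "Emulated Device
Flags" field are (names as in the ACPICA headers, cited from memory):

/* ACPI WAET flag bits. */
#define ACPI_WAET_RTC_NO_ACK     (1 << 0) /* "RTC good": no REG_C ack needed */
#define ACPI_WAET_TIMER_ONE_READ (1 << 1) /* "ACPI PM timer good" */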

> >>> @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
> >>>      {
> >>>          if ( pt->pending_intr_nr )
> >>>          {
> >>> -            /* RTC code takes care of disabling the timer itself. */
> >>> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
> >>> +            if ( pt_irq_masked(pt) &&
> >>>                   /* Level interrupts should be asserted even if masked. */
> >>>                   !pt->level )
> >>>              {
> >>
> >> I'm struggling to relate this to any other part of the patch. In
> >> particular I can't find the case where a periodic timer would be
> >> registered with RTC_IRQ and a NULL private pointer. The only use
> >> I can find is with a non-NULL pointer, which would mean the "else"
> >> path is always taken at present for the RTC case (which you now
> >> change).
> > 
> > Right, the else case was always taken because as the comment noted RTC
> > would take care of disabling itself (by calling destroy_periodic_time
> > from the callback when using strict_mode). When no_ack mode was
> > implemented this wasn't taken into account AFAICT, and thus the RTC
> > was never removed from the list even when masked.
> > 
> > I think with no_ack mode the RTC shouldn't have this specific handling
> > in pt_update_irq, as it should behave like any other virtual timer.
> > I could try to split this as a separate bugfix, but then I would have
> > to teach pt_update_irq to differentiate between strict_mode and no_ack
> > mode.
> 
> A fair part of my confusion was about "&& !pt->priv".

I think you meant "|| !pt->priv"?

> I've looked back
> at 9607327abbd3 ("x86/HVM: properly handle RTC periodic timer even when
> !RTC_PIE"), where this was added. It was, afaict, to cover for
> hpet_set_timer() passing NULL with RTC_IRQ.

That's tricky, as hpet_set_timer hardcodes 8 instead of using
RTC_IRQ, which makes it really easy to miss.
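
For context, the legacy routing path there looks roughly like this
(from memory, so take it as a sketch):

    if ( (tn <= 1) && (h->hpet.config & HPET_CFG_LEGACY) )
        /* Legacy replacement route: timer0 -> IRQ0, timer1 -> IRQ8.
         * Note the literal 8 rather than RTC_IRQ. */
        irq = (tn == 0) ? 0 : 8;
    else
        irq = timer_int_route(h, tn);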

> Which makes me suspect that
> be07023be115 ("x86/vhpet: add support for level triggered interrupts")
> may have subtly broken things.

Right - as that would have made the RTC IRQ, when generated from the
HPET, no longer be suspended when masked (as pt->priv would no longer
be NULL). It could be fixed with:

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index ca94e8b4538..f2cbd12f400 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -318,7 +318,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
                          hpet_tick_to_ns(h, diff),
                          oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
                          irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
-                         (void *)(unsigned long)tn, timer_level(h, tn));
+                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
+                         timer_level(h, tn));
 }
 
 static inline uint64_t hpet_fixup_reg(

That passes NULL again as the callback private data for
edge-triggered interrupts.

> > Would you be fine if the following is added to the commit message
> > instead:
> > 
> > "Note that the special handling of the RTC timer done in pt_update_irq
> > is wrong for the no_ack mode, as the RTC timer callback won't disable
> > the timer anymore when it detects the guest is not reading REG_C. As
> > such remove the code as part of the removal of strict_mode, and don't
> > special case the RTC timer anymore in pt_update_irq."
> 
> Not sure yet - as per above I'm still not convinced this part of the
> change is correct.

I believe part of this handling is kind of bogus - for example I'm
unsure Xen should account masked interrupt injections as missed
ticks. A guest might decide to mask its interrupt source for whatever
reason, and then it shouldn't receive a flurry of interrupts when it
unmasks it. I.e.: missed ticks should only be accounted for
interrupts that should have been delivered while the guest wasn't
scheduled. I think such a model would also simplify some of the logic
that we currently have.
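
Roughly, the distinction would be something like the following
(purely illustrative pseudo-code, not the unposted patch):

    if ( pt_irq_masked(pt) )
        /* Guest masked the source on purpose: drop the tick rather
         * than replaying it on unmask. */
        pt->pending_intr_nr = 0;
    else if ( !vcpu_runnable(pt->vcpu) )
        /* Tick that couldn't be delivered because the vCPU wasn't
         * scheduled: account it so it can be replayed later. */
        pt->pending_intr_nr++;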

In fact I have a patch on top of this series, which I haven't posted
yet, that implements this new mode of not accounting masked
interrupts as missed ticks to be delivered later.

Thanks, Roger.
