
Re: [PATCH for-4.14 v3] x86/rtc: provide mediated access to RTC for PVH dom0


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 8 Jun 2020 17:56:06 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, paul@xxxxxxx
  • Delivery-date: Mon, 08 Jun 2020 15:56:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jun 08, 2020 at 01:47:26PM +0200, Jan Beulich wrote:
> On 08.06.2020 12:29, Roger Pau Monne wrote:
> > Mediated access to the RTC was provided for PVHv1 dom0 using the PV
> > code paths (guest_io_{write/read}), but those accesses were never
> > implemented for PVHv2 dom0. This patch provides such mediated access
> > to the RTC for PVH dom0, just like it is provided for a classic PV
> > dom0.
> > 
> > Pull out some of the RTC logic from guest_io_{read/write} into
> > specific helpers that can be used by both PV and HVM guests. The
> > setup of the handlers for PVH is done in rtc_init, which is already
> > used to initialize the fully emulated RTC.
> > 
> > Without this, a Linux PVH dom0 will read garbage when trying to
> > access the RTC, and one vCPU will be constantly looping in
> > rtc_timer_do_work.
> > 
> > Note that such an issue doesn't happen on domUs because the ACPI
> > NO_CMOS_RTC flag is set in the FADT, which prevents the OS from
> > accessing the RTC. Also, the X86_EMU_RTC flag is not set for PVH
> > dom0, as the accesses are not emulated but rather forwarded to the
> > physical hardware.
> > 
> > No functional change expected for classic PV dom0.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> preferably with ...
> 
> > @@ -1110,6 +1111,67 @@ static unsigned long get_cmos_time(void)
> >      return mktime(rtc.year, rtc.mon, rtc.day, rtc.hour, rtc.min, rtc.sec);
> >  }
> >  
> > +/* Helpers for guest accesses to the physical RTC. */
> > +unsigned int rtc_guest_read(unsigned int port)
> > +{
> > +    const struct domain *currd = current->domain;
> > +    unsigned long flags;
> > +    unsigned int data = ~0;
> > +
> > +    switch ( port )
> > +    {
> > +    case RTC_PORT(0):
> > +        /*
> > +         * All PV domains are allowed to read the latched value of the first
> > +         * RTC port. This is useful in order to store data when debugging.
> > +         */
> 
> ... at least the 2nd sentence of this and ...
> 
> > +void rtc_guest_write(unsigned int port, unsigned int data)
> > +{
> > +    struct domain *currd = current->domain;
> > +    unsigned long flags;
> > +
> > +    switch ( port )
> > +    {
> > +    case RTC_PORT(0):
> > +        /*
> > +         * All PV domains are allowed to write to the latched value of the
> > +         * first RTC port. This is useful in order to store data when
> > +         * debugging.
> > +         */
> 
> ... this comment dropped again. This justification of the possible
> usefulness is purely my own guessing. Just like in the original code,
> I think we could leave this uncommented altogether.

Hm, as you wish. I would prefer to keep something similar to the
first part of the comment; how about:

/*
 * All PV domains (and PVH dom0) are allowed to read/write the
 * latched value of the first RTC port, as there's no access to
 * the physical IO ports.
 */
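
To be concrete, the read side could then look as follows. This is just
a sketch, following the existing PV guest_io_read() logic for the
actual accesses (ioports_access_permitted(), currd->arch.cmos_idx and
the rtc_lock usage are taken from there); the final code may differ:

unsigned int rtc_guest_read(unsigned int port)
{
    const struct domain *currd = current->domain;
    unsigned long flags;
    unsigned int data = ~0;

    switch ( port )
    {
    case RTC_PORT(0):
        /*
         * All PV domains (and PVH dom0) are allowed to read the
         * latched value of the first RTC port, as there's no access
         * to the physical IO ports.
         */
        data = currd->arch.cmos_idx;
        break;

    case RTC_PORT(1):
        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
            break;
        spin_lock_irqsave(&rtc_lock, flags);
        /* Strip the NMI-disable bit before selecting the CMOS index. */
        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
        data = inb(RTC_PORT(1));
        spin_unlock_irqrestore(&rtc_lock, flags);
        break;

    default:
        ASSERT_UNREACHABLE();
    }

    return data;
}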

I can adjust it accordingly, and also add the newline after the break
in the RTC_PORT(0) case, which I missed.
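
For completeness, the PVH dom0 wiring in rtc_init() would then amount
to something like the below. Again just a sketch: hw_rtc_io is an
illustrative name for the trampoline, and the registration relies on
the existing register_portio_handler() infrastructure:

static int hw_rtc_io(int dir, unsigned int port, unsigned int size,
                     uint32_t *val)
{
    if ( size != 1 )
    {
        gdprintk(XENLOG_WARNING, "bad RTC access size (%u)\n", size);
        *val = ~0;
        return X86EMUL_OKAY;
    }

    if ( dir == IOREQ_READ )
        *val = rtc_guest_read(port);
    else
        rtc_guest_write(port, *val);

    return X86EMUL_OKAY;
}

void rtc_init(struct domain *d)
{
    if ( !has_vrtc(d) )
    {
        if ( is_hardware_domain(d) )
            /* Mediated access to the physical RTC for PVH dom0. */
            register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
        return;
    }

    /* ... existing emulated RTC setup ... */
}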

Thanks, Roger.
