[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH for-4.14 v2] x86/rtc: provide mediated access to RTC for PVH dom0


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 5 Jun 2020 17:17:48 +0200
  • Authentication-results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, paul@xxxxxxx
  • Delivery-date: Fri, 05 Jun 2020 15:18:06 +0000
  • Ironport-sdr: D0PnSQOv/L+HoZMJ0K5wPBHnoIgB6sl+ZXbvRR+Fg38kWWX/HNk4Q0cgA+7yFRkyFBYsDDWPLw BXwz/PntUvbgkQNzYks57ubLj8+Tg/TUE4It1w6Zr7/t3XO74N8CEhKEOrd58Pt0j2EmKu7qJD Edc3xfhRxTrEGRAbvtN9uLB6eHsMT8XXWj1cd2+pj119uHTFL7UvEgrmeHsxT7P9mZwm2rZvIl usjoI79M38BumwXw0dqlddLKxMAcaAU33mEdIG3SO2yZ7Yqxz+RgXkuizV1bEcnViq0wHCJ4NE RsM=
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Jun 05, 2020 at 04:44:32PM +0200, Jan Beulich wrote:
> On 05.06.2020 13:02, Roger Pau Monne wrote:
> > Mediated access to the RTC was provided for PVHv1 dom0 using the PV
> > code paths (guest_io_{write/read}), but those accesses where never
> > implemented for PVHv2 dom0. This patch provides such mediated accesses
> > to the RTC for PVH dom0, just like it's provided for a classic PV
> > dom0.
> > 
> > Pull out some of the RTC logic from guest_io_{read/write} into
> > specific helpers that can be used by both PV and HVM guests. The
> > setup of the handlers for PVH is done in rtc_init, which is already
> > used to initialize the fully emulated RTC.
> > 
> > Without this a Linux PVH dom0 will read garbage when trying to access
> > the RTC, and one vCPU will be constantly looping in
> > rtc_timer_do_work.
> > 
> > Note that such issue doesn't happen on domUs because the ACPI
> > NO_CMOS_RTC flag is set in FADT, which prevents the OS from accessing
> > the RTC. Also the X86_EMU_RTC flag is not set for PVH dom0, as the
> > accesses are not emulated but rather forwarded to the physical
> > hardware.
> > 
> > No functional change expected for classic PV dom0.
> 
> But there is, in whether (virtual) port 0x71 can be read/written (even
> by a DomU). I'm afraid of being called guilty in splitting hair, though.

Urg, OK, I realized that but considered it a harmless mistake.

> > @@ -808,10 +809,43 @@ void rtc_reset(struct domain *d)
> >      s->pt.source = PTSRC_isa;
> >  }
> >  
> > +/* RTC mediator for HVM hardware domain. */
> > +static int hw_rtc_io(int dir, unsigned int port, unsigned int size,
> > +                     uint32_t *val)
> > +{
> > +    if ( dir == IOREQ_READ )
> > +        *val = ~0;
> > +
> > +    if ( size != 1 )
> > +    {
> > +        gdprintk(XENLOG_WARNING, "bad RTC access size (%u)\n", size);
> > +        return X86EMUL_OKAY;
> > +    }
> > +    if ( !ioports_access_permitted(current->domain, port, port) )
> 
> This wants to move into the helper, such that the PV side can have
> it moved too.
> 
> >  void rtc_init(struct domain *d)
> >  {
> >      RTCState *s = domain_vrtc(d);
> >  
> > +    if ( is_hardware_domain(d) )
> > +    {
> > +        /* Hardware domain gets mediated access to the physical RTC. */
> > +        register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
> > +        return;
> 
> Any reason for this explicit return, rather than ...
> 
> > +    }
> > +
> >      if ( !has_vrtc(d) )
> >          return;
> 
> ... making use of this one? In fact wouldn't it be more correct
> to have
> 
>     if ( !has_vrtc(d) )
>     {
>         /* Hardware domain gets mediated access to the physical RTC. */
>         if ( is_hardware_domain(d) )
>             register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
>         return;
>     }
> 
> such that eventual (perhaps optional) enabling of vRTC for hwdom
> would have it properly work without changing this function again?

Right, that seems fine to me.

> > --- a/xen/arch/x86/pv/emul-priv-op.c
> > +++ b/xen/arch/x86/pv/emul-priv-op.c
> > @@ -280,19 +280,10 @@ static uint32_t guest_io_read(unsigned int port, 
> > unsigned int bytes,
> >          {
> >              sub_data = pv_pit_handler(port, 0, 0);
> >          }
> > -        else if ( port == RTC_PORT(0) )
> > -        {
> > -            sub_data = currd->arch.cmos_idx;
> 
> Note how there was no permission check here. Having one or more
> I/O ports that can be used to simply latch a value can, as I've
> learned, be quite valuable as a debugging vehicle, and there
> aren't many (if any) ports beyond this one that a PV DomU might
> use for such a purpose. Arguably the value is somewhat limited
> here, as the value wouldn't survive a crash, but I'd still
> prefer if we could retain prior functionality.

OK, as said above I considered this a harmless mistake, but seeing as
you find it valuable I will make sure to keep the behavior.

> > @@ -1110,6 +1111,64 @@ static unsigned long get_cmos_time(void)
> >      return mktime(rtc.year, rtc.mon, rtc.day, rtc.hour, rtc.min, rtc.sec);
> >  }
> >  
> > +/* Helpers for guest accesses to the physical RTC. */
> > +unsigned int rtc_guest_read(unsigned int port)
> > +{
> > +    const struct domain *currd = current->domain;
> > +    unsigned long flags;
> > +    unsigned int data = ~0;
> > +
> > +    ASSERT(port == RTC_PORT(0) || port == RTC_PORT(1));
> 
> Instead of this, how about ...
> 
> > +    if ( !ioports_access_permitted(currd, port, port) )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return data;
> > +    }
> > +
> > +    switch ( port )
> > +    {
> > +    case RTC_PORT(0):
> > +        data = currd->arch.cmos_idx;
> > +        break;
> > +
> > +    case RTC_PORT(1):
> > +        spin_lock_irqsave(&rtc_lock, flags);
> > +        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
> > +        data = inb(RTC_PORT(1));
> > +        spin_unlock_irqrestore(&rtc_lock, flags);
> > +        break;
> 
>     default:
>         ASSERT_UNREACHABLE();
>         break;
> 
> ?

Sure.

Thanks, Roger.



 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.