
Re: [PATCH] x86/vPIC: register only one ELCR handler instance


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 30 May 2023 11:08:07 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 30 May 2023 09:08:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, May 30, 2023 at 10:48:02AM +0200, Jan Beulich wrote:
> On 29.05.2023 10:39, Roger Pau Monné wrote:
> > On Fri, May 26, 2023 at 09:35:04AM +0200, Jan Beulich wrote:
> >> There's no point consuming two port-I/O slots. Even less so considering
> >> that some real hardware permits both ports to be accessed in one go,
> >> emulating of which requires there to be only a single instance.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>
> >> --- a/xen/arch/x86/hvm/vpic.c
> >> +++ b/xen/arch/x86/hvm/vpic.c
> >> @@ -377,25 +377,34 @@ static int cf_check vpic_intercept_elcr_
> >>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> >>  {
> >>      struct hvm_hw_vpic *vpic;
> >> -    uint32_t data;
> >> +    unsigned int data, shift = 0;
> >>  
> >> -    BUG_ON(bytes != 1);
> >> +    BUG_ON(bytes > 2 - (port & 1));
> >>  
> >>      vpic = &current->domain->arch.hvm.vpic[port & 1];
> >>  
> >> -    if ( dir == IOREQ_WRITE )
> >> -    {
> >> -        /* Some IRs are always edge trig. Slave IR is always level trig. */
> >> -        data = *val & vpic_elcr_mask(vpic);
> >> -        if ( vpic->is_master )
> >> -            data |= 1 << 2;
> >> -        vpic->elcr = data;
> >> -    }
> >> -    else
> >> -    {
> >> -        /* Reader should not see hardcoded level-triggered slave IR. */
> >> -        *val = vpic->elcr & vpic_elcr_mask(vpic);
> >> -    }
> >> +    do {
> >> +        if ( dir == IOREQ_WRITE )
> >> +        {
> >> +            /* Some IRs are always edge trig. Slave IR is always level trig. */
> >> +            data = (*val >> shift) & vpic_elcr_mask(vpic);
> >> +            if ( vpic->is_master )
> >> +                data |= 1 << 2;
> > 
> > Not that you added this, but I'm confused.  The spec I'm reading
> > explicitly states that bits 0:2 are reserved and must be 0.
> > 
> > Is this some quirk of the specific chipset we aim to emulate?
> 
> I don't think so. Note that upon reads the bit is masked out again.
> Adding back further context, there's even a comment to this effect:
> 
> +        else
> +        {
> +            /* Reader should not see hardcoded level-triggered slave IR. */
> +            data = vpic->elcr & vpic_elcr_mask(vpic);
> 
> The setting of the bit is solely for internal handling purposes,
> aiui.

Oh, I see, I should have paid more attention to the "Slave IR is
always level trig." comment; it might have been helpful if this was
noted as being an internal implementation detail.

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.
