
Re: [PATCH] x86/vPIC: register only one ELCR handler instance


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 30 May 2023 10:48:02 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 30 May 2023 08:48:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 29.05.2023 10:39, Roger Pau Monné wrote:
> On Fri, May 26, 2023 at 09:35:04AM +0200, Jan Beulich wrote:
>> There's no point consuming two port-I/O slots. Even less so considering
>> that some real hardware permits both ports to be accessed in one go,
>> emulating of which requires there to be only a single instance.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> --- a/xen/arch/x86/hvm/vpic.c
>> +++ b/xen/arch/x86/hvm/vpic.c
>> @@ -377,25 +377,34 @@ static int cf_check vpic_intercept_elcr_
>>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>>  {
>>      struct hvm_hw_vpic *vpic;
>> -    uint32_t data;
>> +    unsigned int data, shift = 0;
>>  
>> -    BUG_ON(bytes != 1);
>> +    BUG_ON(bytes > 2 - (port & 1));
>>  
>>      vpic = &current->domain->arch.hvm.vpic[port & 1];
>>  
>> -    if ( dir == IOREQ_WRITE )
>> -    {
>> -        /* Some IRs are always edge trig. Slave IR is always level trig. */
>> -        data = *val & vpic_elcr_mask(vpic);
>> -        if ( vpic->is_master )
>> -            data |= 1 << 2;
>> -        vpic->elcr = data;
>> -    }
>> -    else
>> -    {
>> -        /* Reader should not see hardcoded level-triggered slave IR. */
>> -        *val = vpic->elcr & vpic_elcr_mask(vpic);
>> -    }
>> +    do {
>> +        if ( dir == IOREQ_WRITE )
>> +        {
>> +            /* Some IRs are always edge trig. Slave IR is always level trig. */
>> +            data = (*val >> shift) & vpic_elcr_mask(vpic);
>> +            if ( vpic->is_master )
>> +                data |= 1 << 2;
> 
> Not that you added this, but I'm confused.  The spec I'm reading
> explicitly states that bits 0:2 are reserved and must be 0.
> 
> Is this some quirk of the specific chipset we aim to emulate?

I don't think so. Note that upon reads the bit is masked out again.
Adding back further context, there's even a comment to this effect:

+        else
+        {
+            /* Reader should not see hardcoded level-triggered slave IR. */
+            data = vpic->elcr & vpic_elcr_mask(vpic);

The setting of the bit is solely for internal handling purposes,
aiui.
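
To make the bit-2 round trip concrete, here is a minimal standalone
sketch (plain C, deliberately not the vpic.c code itself; the 0xf8
master mask is an assumption taken from the usual ELCR layout, where
IRQ0-IRQ2 are fixed edge-triggered): whatever the write path stores in
bit 2 never reaches the guest on a read.

    /* Standalone illustration, not Xen source. */
    #include <stdio.h>

    /* Assumed master ELCR mask: bits 0-2 (timer, keyboard, cascade) are
     * reserved, so only bits 3-7 are guest-controllable. */
    #define MASTER_ELCR_MASK 0xf8u

    int main(void)
    {
        unsigned int elcr;
        unsigned int guest_write = 0x2f;  /* guest tries to set bits 0-3 and 5 */

        /* Write path: strip reserved bits, then force bit 2 (the cascade
         * IR, always level-triggered) purely for internal bookkeeping. */
        elcr = (guest_write & MASTER_ELCR_MASK) | (1u << 2);

        /* Read path: the same mask hides the internally set bit again. */
        printf("stored: %#x, guest reads: %#x\n",
               elcr, elcr & MASTER_ELCR_MASK);

        return 0;
    }

Run as-is this prints "stored: 0x2c, guest reads: 0x28", i.e. the guest
can neither set nor observe bit 2; it exists only in the internally
stored value.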

Jan
