
Re: [PATCH v2 1/3] xen/vpci: msix: move x86 specific code to x86 file


  • To: Rahul Singh <Rahul.Singh@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 1 Mar 2022 14:55:04 +0100
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 01 Mar 2022 13:55:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 01.03.2022 14:34, Rahul Singh wrote:
>> On 24 Feb 2022, at 2:57 pm, Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 15.02.2022 16:25, Rahul Singh wrote:
>>> The vpci/msix.c file will be used by the Arm architecture once vpci
>>> MSI-X support is added to Arm, but there is x86-specific code in this
>>> file.
>>>
>>> Move the x86-specific code to the x86/hvm/vmsi.c file, so that the
>>> common code can be used by other architectures.
>>
>> Could you provide some criteria by which code is considered x86-specific
>> (or not)? For example ...
> 
> The code moved to the x86 file is code that will either be unusable or
> need to differ for Arm once MSI-X support is introduced there.

That's a very abstract statement, from which one can't really derive any
judgement. It will be necessary to see to what extent the code is indeed
different. After all, PCI, MSI, and MSI-X are largely arch-agnostic.

>>> --- a/xen/arch/x86/hvm/vmsi.c
>>> +++ b/xen/arch/x86/hvm/vmsi.c
>>> @@ -925,4 +925,106 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>>>
>>>     return 0;
>>> }
>>> +
>>> +int vpci_make_msix_hole(const struct pci_dev *pdev)
>>> +{
>>> +    struct domain *d = pdev->domain;
>>> +    unsigned int i;
>>> +
>>> +    if ( !pdev->vpci->msix )
>>> +        return 0;
>>> +
>>> +    /* Make sure there's a hole for the MSIX table/PBA in the p2m. */
>>> +    for ( i = 0; i < ARRAY_SIZE(pdev->vpci->msix->tables); i++ )
>>> +    {
>>> +        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
>>> +        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
>>> +                                     vmsix_table_size(pdev->vpci, i) - 1);
>>> +
>>> +        for ( ; start <= end; start++ )
>>> +        {
>>> +            p2m_type_t t;
>>> +            mfn_t mfn = get_gfn_query(d, start, &t);
>>> +
>>> +            switch ( t )
>>> +            {
>>> +            case p2m_mmio_dm:
>>> +            case p2m_invalid:
>>> +                break;
>>> +            case p2m_mmio_direct:
>>> +                if ( mfn_x(mfn) == start )
>>> +                {
>>> +                    clear_identity_p2m_entry(d, start);
>>> +                    break;
>>> +                }
>>> +                /* fallthrough. */
>>> +            default:
>>> +                put_gfn(d, start);
>>> +                gprintk(XENLOG_WARNING,
>>> +                        "%pp: existing mapping (mfn: %" PRI_mfn
>>> +                        " type: %d) at %#lx clobbers MSIX MMIO area\n",
>>> +                        &pdev->sbdf, mfn_x(mfn), t, start);
>>> +                return -EEXIST;
>>> +            }
>>> +            put_gfn(d, start);
>>> +        }
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>
>> ... nothing in this function looks to be x86-specific, except that
>> functions like clear_identity_p2m_entry() may not currently be available
>> on Arm. But that doesn't make the code itself x86-specific.
> 
> I may be wrong, but what I understand from the code is that on x86, if no
> p2m entries are set up for the region, accesses to it will be trapped into
> the hypervisor and can be handled by a specific MMIO handler.
> 
> But on Arm, when registering the MMIO handler we also have to provide the
> GPA at registration time.

The question is: is this just an effect of the implementations differing,
or an inherent requirement? In the former case, harmonizing things may be
an alternative option.
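
To make that concrete, a harmonized layout could keep the loop in common
code and defer the per-region action to an arch hook, along these lines (a
sketch only; the hook name vpci_arch_msix_region() and its signature are
assumptions, not existing Xen interfaces):

    /* Hypothetical per-arch hook: x86 would punch the p2m hole for the
     * region, while Arm would register an MMIO handler for it. */
    int vpci_arch_msix_region(const struct pci_dev *pdev,
                              paddr_t addr, paddr_t size);

    /* Common, arch-agnostic part: */
    int vpci_make_msix_hole(const struct pci_dev *pdev)
    {
        unsigned int i;

        if ( !pdev->vpci->msix )
            return 0;

        for ( i = 0; i < ARRAY_SIZE(pdev->vpci->msix->tables); i++ )
        {
            int rc = vpci_arch_msix_region(pdev,
                                           vmsix_table_addr(pdev->vpci, i),
                                           vmsix_table_size(pdev->vpci, i));

            if ( rc )
                return rc;
        }

        return 0;
    }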

> On Arm, vpci_make_msix_hole() will be used to register the MMIO handler
> for the MSI-X MMIO regions:
> 
> int vpci_make_msix_hole(const struct pci_dev *pdev)
> {
>     struct vpci_msix *msix = pdev->vpci->msix;
>     paddr_t addr, size;
> 
>     for ( int i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
>     {
>         addr = vmsix_table_addr(pdev->vpci, i);
>         size = vmsix_table_size(pdev->vpci, i);
> 
>         register_mmio_handler(pdev->domain, &vpci_msi_mmio_handler,
>                               addr, size, NULL);
>     }
> 
>     return 0;
> }
> 
> Therefore, in this case there is a difference in how Arm handles things.
>  
>>
>>> +struct vpci_msix *vpci_msix_find(const struct domain *d, unsigned long addr)
>>> +{
>>> +    struct vpci_msix *msix;
>>> +
>>> +    list_for_each_entry ( msix, &d->arch.hvm.msix_tables, next )
>>> +    {
>>> +        const struct vpci_bar *bars = msix->pdev->vpci->header.bars;
>>> +        unsigned int i;
>>> +
>>> +        for ( i = 0; i < ARRAY_SIZE(msix->tables); i++ )
>>> +            if ( bars[msix->tables[i] & PCI_MSIX_BIRMASK].enabled &&
>>> +                 VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, i) )
>>> +                return msix;
>>> +    }
>>> +
>>> +    return NULL;
>>> +}
>>
>> Or take this one - I don't see anything x86-specific in here. The use
>> of d->arch.hvm merely points out that there may be a field which now
>> needs generalizing.
> 
> Yes, you are right; I can avoid this change if I introduce
> "struct list_head msix_tables" in "d->arch.hvm" for Arm as well.

Wait - if you pass in the guest address at registration time, you
shouldn't have a need for a "find" function.
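
For instance, the msix pointer could be handed over as the handler's
private data at registration time, so the read/write handlers get it back
directly (a sketch only, against Arm's register_mmio_handler(); the
handler and ops names here are illustrative, not a proposed final shape):

    /* Illustrative Arm-side read handler: the "priv" registered below
     * comes straight back, so no per-domain list walk is needed. */
    static int msix_mmio_read(struct vcpu *v, mmio_info_t *info,
                              register_t *r, void *priv)
    {
        struct vpci_msix *msix = priv; /* no vpci_msix_find() required */

        /* ... service the access at info->gpa ... */
        return 1;
    }

    static const struct mmio_handler_ops vpci_msix_mmio_ops = {
        .read = msix_mmio_read,
        /* .write analogous */
    };

    /* At registration, pass the guest address range and the msix
     * pointer itself: */
    register_mmio_handler(pdev->domain, &vpci_msix_mmio_ops,
                          vmsix_table_addr(pdev->vpci, i),
                          vmsix_table_size(pdev->vpci, i), msix);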

Jan
