
Re: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Rahul Singh <Rahul.Singh@xxxxxxx>
  • Date: Wed, 28 Oct 2020 15:20:58 +0000
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 28 Oct 2020 15:21:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.

Hello Jan,

> On 28 Oct 2020, at 11:51 am, Jan Beulich <jbeulich@xxxxxxxx> wrote:
> 
> On 26.10.2020 18:17, Rahul Singh wrote:
>> passthrough/pci.c file is common for all architectures, but there is x86
>> specific code in this file.
> 
> The code you move doesn't look to be x86 specific in the sense that
> it makes no sense on other architectures, but just because certain
> pieces are missing on Arm. With this I question whether it is really
> appropriate to move this code. I do realize that in similar earlier
> cases my questioning was mostly ignored ...
> 
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/x86/pci.c
>> @@ -0,0 +1,97 @@
>> +/*
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along
>> + * with this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/param.h>
>> +#include <xen/sched.h>
>> +#include <xen/pci.h>
>> +#include <xen/pci_regs.h>
>> +
>> +static int pci_clean_dpci_irq(struct domain *d,
>> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
>> +{
>> +    struct dev_intx_gsi_link *digl, *tmp;
>> +
>> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
>> +
>> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
>> +        kill_timer(&pirq_dpci->timer);
>> +
>> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
>> +    {
>> +        list_del(&digl->list);
>> +        xfree(digl);
>> +    }
>> +
>> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
>> +
>> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
>> +        return 0;
>> +
>> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
>> +
>> +    return -ERESTART;
>> +}
>> +
>> +static int pci_clean_dpci_irqs(struct domain *d)
>> +{
>> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
>> +
>> +    if ( !is_iommu_enabled(d) )
>> +        return 0;
>> +
>> +    if ( !is_hvm_domain(d) )
>> +        return 0;
>> +
>> +    spin_lock(&d->event_lock);
>> +    hvm_irq_dpci = domain_get_irq_dpci(d);
>> +    if ( hvm_irq_dpci != NULL )
>> +    {
>> +        int ret = 0;
>> +
>> +        if ( hvm_irq_dpci->pending_pirq_dpci )
>> +        {
>> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
>> +                 ret = -ERESTART;
>> +            else
>> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
>> +        }
>> +
>> +        if ( !ret )
>> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
>> +        if ( ret )
>> +        {
>> +            spin_unlock(&d->event_lock);
>> +            return ret;
>> +        }
>> +
>> +        hvm_domain_irq(d)->dpci = NULL;
>> +        free_hvm_irq_dpci(hvm_irq_dpci);
>> +    }
>> +    spin_unlock(&d->event_lock);
>> +    return 0;
> 
> While moving please add the missing blank line before the main
> return statement of the function.

Ok, I will fix that in the next version.
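For reference, the tail of pci_clean_dpci_irqs() would then look like this
(only the formatting change, nothing else touched):

        hvm_domain_irq(d)->dpci = NULL;
        free_hvm_irq_dpci(hvm_irq_dpci);
    }
    spin_unlock(&d->event_lock);

    return 0;
}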
> 
>> +}
>> +
>> +int arch_pci_release_devices(struct domain *d)
>> +{
>> +    return pci_clean_dpci_irqs(d);
>> +}
> 
> Why the extra function layer?

Is it ok if I rename the function pci_clean_dpci_irqs() to
arch_pci_clean_pirqs() and drop the wrapper?
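
To make it concrete, this is roughly what I have in mind, assuming the
wrapper is dropped and the common code calls the arch hook under the new
name (where exactly the declaration lives is just a guess on my side):

/* declaration, e.g. in xen/include/xen/pci.h */
int arch_pci_clean_pirqs(struct domain *d);

/* xen/drivers/passthrough/x86/pci.c */
int arch_pci_clean_pirqs(struct domain *d)
{
    /* body of today's pci_clean_dpci_irqs(), unchanged */
    ...
}

/* common passthrough/pci.c caller, instead of arch_pci_release_devices() */
ret = arch_pci_clean_pirqs(d);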

> 
> Jan
> 

Regards,
Rahul

