Re: [PATCH v3 02/11] vpci: Add hooks for PCI device assign/de-assign
On 30.09.21 13:14, Jan Beulich wrote:
> On 30.09.2021 11:21, Oleksandr Andrushchenko wrote:
>> On 30.09.21 12:06, Jan Beulich wrote:
>>> On 30.09.2021 10:45, Oleksandr Andrushchenko wrote:
>>>> On 30.09.21 11:21, Jan Beulich wrote:
>>>>> On 30.09.2021 09:52, Oleksandr Andrushchenko wrote:
>>>>>> @@ -1429,6 +1433,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>>>>>          rc = hd->platform_ops->assign_device(d, devfn, pci_to_dev(pdev), flag);
>>>>>>      }
>>>>>>
>>>>>> +    if ( rc )
>>>>>> +        goto done;
>>>>> From all I can tell this is dead code.
>>>> Before the change rc was set in the loop, and then we fall through
>>>> to the "done" label. I do agree that, the way this code is written,
>>>> the value of rc will only reflect the last assignment done in the
>>>> loop, but with my change I didn't want to alter the existing
>>>> behavior, thus "if ( rc )".
>>> rc is always 0 upon loop exit, afaict:
>>>
>>>     for ( ; pdev->phantom_stride; rc = 0 )
>>>
>>> Granted this is unusual and hence possibly unexpected.
>> I will remove that check then. Do we want a comment about rc == 0,
>> so it is seen why there is no check for rc?
> So far we've been doing fine without such a comment, but I wouldn't
> object to a well worded one getting added.
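For reference, the idiom Jan points at can be seen in isolation in the
following standalone sketch (do_assign() is a hypothetical stand-in for
the real assignment hook, not the actual Xen code): the third for-clause
runs after every iteration's body and before the condition is
re-evaluated, so whatever the body stored in rc is wiped before the loop
can exit.

    #include <stdio.h>

    /* Hypothetical stand-in for the per-phantom-function assignment. */
    static int do_assign(unsigned int fn)
    {
        printf("assigning function %u\n", fn);
        return fn == 2 ? -22 : 0;   /* pretend the last function fails */
    }

    int main(void)
    {
        int rc = 0;
        unsigned int fn = 0;

        /*
         * "rc = 0" runs after every iteration, so on normal loop exit
         * rc has always been reset, regardless of what the body did.
         */
        for ( ; fn < 3; rc = 0 )
            rc = do_assign(fn++);

        printf("rc after loop: %d\n", rc);   /* always prints 0 */
        return 0;
    }

Compiled and run, this prints "rc after loop: 0" even though the final
iteration "failed", which is why a subsequent "if ( rc )" check is dead
code.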
>>>>>> --- a/xen/drivers/vpci/vpci.c
>>>>>> +++ b/xen/drivers/vpci/vpci.c
>>>>>> @@ -86,6 +86,29 @@ int __hwdom_init vpci_add_handlers(struct pci_dev *pdev)
>>>>>>
>>>>>>      return rc;
>>>>>>  }
>>>>>> +
>>>>>> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>>>>>> +/* Notify vPCI that device is assigned to guest. */
>>>>>> +int vpci_assign_device(struct domain *d, const struct pci_dev *dev)
>>>>>> +{
>>>>>> +    /* It only makes sense to assign for hwdom or guest domain. */
>>>>> Could you clarify for me to what extent this code path is indeed
>>>>> intended to be taken by hwdom? Because if it is, I'd like to further
>>>>> understand the interaction with setup_hwdom_pci_devices().
>>>> setup_hwdom_pci_devices is not used on Arm, as we rely on Dom0 to
>>>> perform PCI host initialization and PCI device enumeration.
>>>>
>>>> This is because on Arm it is not a trivial task to initialize a PCI
>>>> host bridge in Xen: you need to properly initialize power domains,
>>>> clocks, quirks etc. for different SoCs. All of this makes the task
>>>> too complex, and it was decided that, for now, we do not want to
>>>> bring PCI device drivers into Xen for that. Instead we expect Dom0
>>>> to take care of initialization and enumeration.
>>>> Some day, when firmware can do PCI initialization for us so that we
>>>> can easily access ECAM, this will change, and setup_hwdom_pci_devices
>>>> will be used on Arm as well.
>>>>
>>>> Thus, we need to take care that Xen learns about the discovered
>>>> PCI devices via assign_device etc.
>>> Fair enough, but since I've not spotted a patch expressing this
>> Well, it is all in the RFC for PCI passthrough on Arm which is
>> mentioned in the series from Arm and EPAM (part 2). I didn't mention
>> the RFC in the cover letter for this series, though.
>>> (by adding suitable conditionals), may I ask that you do so in yet
>>> another patch (unless I've overlooked where this gets done)?
>> Could you please elaborate on which conditionals you are talking
>> about here? I'm afraid I didn't understand this part.
> By putting it inside #if or adding "if ( !IS_ENABLED() )", you'd make
> it very obvious that the code in question isn't used, and hence that
> no interaction issues with vPCI exist.
>
>>>>>> +    if ( is_system_domain(d) || !has_vpci(d) )
>>>>>> +        return 0;
>>>>>> +
>>>>>> +    return 0;
>>>>>> +}
>>>>>> +
>>>>>> +/* Notify vPCI that device is de-assigned from guest. */
>>>>>> +int vpci_deassign_device(struct domain *d, const struct pci_dev *dev)
>>>>>> +{
>>>>>> +    /* It only makes sense to de-assign from hwdom or guest domain. */
>>>>>> +    if ( is_system_domain(d) || !has_vpci(d) )
>>>>>> +        return 0;
>>>>>> +
>>>>>> +    return 0;
>>>>>> +}
>>>>>> +#endif /* CONFIG_HAS_VPCI_GUEST_SUPPORT */
>>>>> At this point of the series #ifdef is the less preferable variant of
>>>>> arranging for dead code to get compiled out.
>>>> What is that other, preferable way then?
>>> "if ( !IS_ENABLED() )", as I already pointed out to you yesterday in
>>> reply to v2 of patch 10 of this very series.
>> Please see below.
>>>>> I expect later patches will change that?
>>>> No, it is going to be this way all the time.
>>> The question wasn't whether you switch away from the #ifdef-s, but
>>> whether later patches leave that as the only choice (avoiding build
>>> breakage).
>> Yes, the code is going to remain ifdef'ed, so we don't have dead code
>> for x86 (at least).
>> So, does the above mean that you are ok with
>> "#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT" and there is no need for
>> "if ( !IS_ENABLED() )"?
> I'm afraid you still didn't understand: "if ( !IS_ENABLED() )" is
> also a way to make sure there's (almost) no dead code. And this model
> has the advantage that the compiler would still check all that code
> even in x86 builds (throwing away most of it in one of its DCE
> passes), reducing the risk of someone who doesn't routinely do Arm
> builds introducing a build issue.
>
> But as soon as code references struct members which sit inside an
> #ifdef, that code can't use this preferred approach anymore. That's
> what I suspect might be happening in subsequent patches, which would
> then justify your choice of #ifdef.
This is the key to my not understanding: indeed, there are structure
members which are ifdef'ed, rendering the IS_ENABLED idea not
applicable:

@@ -444,6 +444,14 @@ struct domain

+#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
+    struct list_head vdev_list;
+    /*
+     * Current device number used by the virtual PCI bus topology
+     * to assign a unique SBDF to a passed through virtual PCI device.
+     */
+    int vpci_dev_next;
+#endif

> Jan

Thank you,
Oleksandr
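To make the trade-off above concrete, here is a contrived,
self-contained sketch; CONFIG_FEATURE, the struct members and the
simplified IS_ENABLED() definition are all hypothetical stand-ins, not
Xen's actual macro or code. It shows why the IS_ENABLED() pattern keeps
the body visible to the compiler in every configuration, and why it
stops working as soon as the body references a member that itself sits
behind an #ifdef.

    #define CONFIG_FEATURE 0   /* pretend Kconfig output; 1 when enabled */
    #define IS_ENABLED(x) (x)  /* simplified stand-in for the real macro */

    struct domain {
        int id;
    #if CONFIG_FEATURE
        int vdev_next;         /* only exists when the feature is on */
    #endif
    };

    /*
     * Pattern 1: IS_ENABLED(). The whole body is parsed and type-checked
     * in every configuration, and dead-code elimination drops the tail
     * when the feature is off. This only works while the body touches
     * nothing that is itself #ifdef'ed away.
     */
    int feature_op(struct domain *d)
    {
        if ( !IS_ENABLED(CONFIG_FEATURE) )
            return 0;

        return d->id;          /* fine: 'id' always exists */
    }

    /*
     * Pattern 2: #ifdef. Required as soon as the body references a member
     * like 'vdev_next': even though an IS_ENABLED() branch would be
     * eliminated, the reference must still type-check, and it cannot
     * when the member doesn't exist in this configuration.
     */
    #if CONFIG_FEATURE
    int feature_op2(struct domain *d)
    {
        return d->vdev_next;   /* compiles only when the member exists */
    }
    #endif

With CONFIG_FEATURE set to 0 this still compiles as a standalone
translation unit, while moving "return d->vdev_next;" into feature_op()
under the IS_ENABLED() guard would break the build; that is the
situation which justifies the #ifdef in the patch.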