Re: [PATCH] vpci/msix: exit early if MSI-X is disabled
On 02.12.2020 09:38, Jan Beulich wrote:
> On 01.12.2020 18:40, Roger Pau Monne wrote:
>> --- a/xen/drivers/vpci/msix.c
>> +++ b/xen/drivers/vpci/msix.c
>> @@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
>>           * so that it picks the new state.
>>           */
>>          entry->masked = new_masked;
>> -        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
>> +
>> +        if ( !msix->enabled )
>> +            break;
>> +
>> +        if ( !new_masked && !msix->masked && entry->updated )
>>          {
>>              /*
>>               * If MSI-X is enabled, the function mask is not active, the entry
>
> What about a "disabled" -> "enabled-but-masked" transition? This,
> afaict, similarly won't trigger setting up of entries from
> control_write(), and hence I'd expect the ASSERT() to similarly
> trigger when subsequently an entry's mask bit gets altered.
>
> I'd also be fine making this further adjustment, if you agree,
> but the one thing I haven't been able to fully convince myself of
> is that there's then still no need to set ->updated to true.

I've taken another look. I think setting ->updated (or something
equivalent) is needed in that case, in order to not lose the setting of
the entry mask bit. However, this would only defer the problem to
control_write(): This would now need to call vpci_msix_arch_mask_entry()
under suitable conditions, but avoid calling it when the entry is
disabled or was never set up. No matter whether making the setting of
->updated conditional, or adding a conditional call in update_entry(),
we'd need to evaluate whether the entry is currently disabled. Imo,
instead of introducing a new arch hook for this, it's easier to make
vpci_msix_arch_mask_entry() tolerate getting called on a disabled entry.
Below my proposed alternative change.

While writing the description I started wondering why we require address
or data fields to have got written before the first unmask.
I don't think the hardware imposes such a requirement; zeros would be
used instead, whatever this means. Let's not forget that it's only the
primary purpose of MSI/MSI-X to trigger interrupts. Forcing the writes
to go elsewhere in memory is not forbidden from all I know, and could be
used by a driver. IOW I think ->updated should start out as set to true.
But of course vpci_msi_update() then would need to check the upper
address bits and avoid setting up an interrupt if they're not 0xfee. And
further arrangements would be needed to have the guest requested write
actually get carried out correctly.

Jan

x86/vPCI: tolerate (un)masking a disabled MSI-X entry

None of the four reasons causing vpci_msix_arch_mask_entry() to get
called (there's just a single call site) are impossible or illegal prior
to an entry actually having got set up:
- the entry may remain masked (in this case, however, a prior masked ->
  unmasked transition would already not have worked),
- MSI-X may not be enabled,
- the global mask bit may be set,
- the entry may not otherwise have been updated.

Hence the function asserting that the entry was previously set up was
simply wrong. Since the caller tracks the masked state (and setting up
of an entry would only be effected when that software bit is clear),
it's okay to skip both masking and unmasking requests in this case.

Fixes: d6281be9d0145 ('vpci/msix: add MSI-X handlers')
Reported-by: Manuel Bouyer <bouyer@xxxxxxxxxxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -840,8 +840,8 @@ void vpci_msi_arch_print(const struct vp
 void vpci_msix_arch_mask_entry(struct vpci_msix_entry *entry,
                                const struct pci_dev *pdev, bool mask)
 {
-    ASSERT(entry->arch.pirq != INVALID_PIRQ);
-    vpci_mask_pirq(pdev->domain, entry->arch.pirq, mask);
+    if ( entry->arch.pirq != INVALID_PIRQ )
+        vpci_mask_pirq(pdev->domain, entry->arch.pirq, mask);
 }

 int vpci_msix_arch_enable_entry(struct vpci_msix_entry *entry,
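[Editor's illustration, not part of the original thread.] The behavioral
change in the patch above can be modeled in a standalone sketch: instead
of asserting that an entry was set up (pirq valid), the mask helper
silently ignores (un)mask requests for never-enabled entries. The
`struct entry`, `INVALID_PIRQ` value, and `mask_entry()` below are
simplified stand-ins for Xen's `struct vpci_msix_entry`,
`INVALID_PIRQ`, and `vpci_msix_arch_mask_entry()`, not the real code.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified sentinel; Xen's real INVALID_PIRQ comes from its pirq
 * infrastructure. */
#define INVALID_PIRQ (-1)

/* Minimal model of an MSI-X table entry's arch state. */
struct entry {
    int pirq;     /* INVALID_PIRQ until the entry has been set up */
    bool masked;  /* effective mask state, as seen by this model */
};

/*
 * Model of the patched vpci_msix_arch_mask_entry(): a request to
 * (un)mask an entry that was never set up is tolerated and becomes a
 * no-op, rather than tripping an assertion.
 */
static void mask_entry(struct entry *e, bool mask)
{
    if (e->pirq != INVALID_PIRQ)
        e->masked = mask;
}
```

In this model, a guest toggling the mask bit of a disabled entry leaves
the state untouched; once the entry is set up (pirq valid), masking
takes effect as before.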