Re: [PATCH v2 2/8] vpci/header: Emulate legacy capability list for host
On Tue, Apr 15, 2025 at 10:07:14AM +0000, Chen, Jiqian wrote:
> On 2025/4/15 17:25, Roger Pau Monné wrote:
> > On Wed, Apr 09, 2025 at 02:45:22PM +0800, Jiqian Chen wrote:
> >> +static int vpci_init_capability_list(struct pci_dev *pdev)
> >> +{
> >> +    int rc;
> >> +    bool mask_cap_list = false;
> >> +    bool is_hwdom = is_hardware_domain(pdev->domain);
> >> +    const unsigned int *caps = is_hwdom ? NULL : guest_supported_caps;
> >> +    const unsigned int n = is_hwdom ? 0 : ARRAY_SIZE(guest_supported_caps);
> >> +
> >> +    if ( pci_conf_read16(pdev->sbdf, PCI_STATUS) & PCI_STATUS_CAP_LIST )
> >> +    {
> >> +        unsigned int next, ttl = 48;
> >> +
> >> +        next = pci_find_next_cap_ttl(pdev->sbdf, PCI_CAPABILITY_LIST,
> >> +                                     caps, n, &ttl);
> >> +
> >> +        rc = vpci_add_register(pdev->vpci, vpci_read_val, NULL,
> >> +                               PCI_CAPABILITY_LIST, 1,
> >> +                               (void *)(uintptr_t)next);
> >> +        if ( rc )
> >> +            return rc;
> >> +
> >> +        next &= ~3;
> >> +
> >> +        if ( !next && !is_hwdom )
> >> +            /*
> >> +             * If we don't have any supported capabilities to expose to the
> >> +             * guest, mask the PCI_STATUS_CAP_LIST bit in the status
> >> +             * register.
> >> +             */
> >> +            mask_cap_list = true;
> >> +
> >> +        while ( next && ttl )
> >> +        {
> >> +            unsigned int pos = next;
> >> +
> >> +            next = pci_find_next_cap_ttl(pdev->sbdf, pos + PCI_CAP_LIST_NEXT,
> >> +                                         caps, n, &ttl);
> >> +
> >> +            rc = vpci_add_register(pdev->vpci, vpci_hw_read8, NULL,
> >> +                                   pos + PCI_CAP_LIST_ID, 1, NULL);
> >
> > There's no need to add this handler for the hardware domain, that's
> > already the default behavior in that case.
> But if I don't, there is no handler to remove from the capability list in
> the next patch's hiding function, vpci_capability_mask(), and then I can't
> successfully hide the capability.
Oh, I see. I have further comments on that approach; see my remarks on the
follow-up patches.
Thanks, Roger.