Re: [Xen-devel] [PATCH v4 6/9] xen/vpci: add handlers to map the BARs
>>> Roger Pau Monne <roger.pau@xxxxxxxxxx> 07/24/17 4:58 PM >>>
>On Fri, Jul 14, 2017 at 09:11:29AM -0600, Jan Beulich wrote:
>> >>> On 30.06.17 at 17:01, <roger.pau@xxxxxxxxxx> wrote:
>> > +    list_for_each_entry(pdev, &d->arch.pdev_list, domain_list)
>> > +    {
>> > +        uint16_t cmd = pci_conf_read16(pdev->seg, pdev->bus,
>> > +                                       PCI_SLOT(pdev->devfn),
>> > +                                       PCI_FUNC(pdev->devfn),
>> > +                                       PCI_COMMAND);
>>
>> This is quite a lot of overhead - a loop over all devices plus a config
>> space read on each one. What state the memory decode bit is in
>> could be recorded in the ->enabled flag, couldn't it? And devices on
>> different sub-branches of the topology can't possibly have
>> overlapping entries that we need to worry about, as the bridge
>> windows would suppress actual accesses.
>
>Oh, so Xen only needs to care about devices that share the same
>bridge, because that is the only case where the same page can be
>shared by multiple devices?

Yes, that's my understanding (unless bridge windows overlap, in which
case I don't know what the resulting behavior would be).

>In any case, the Dom0 is free to wrongly position the BARs anywhere it
>wants, thus possibly placing them outside of the bridge windows, in
>which case I think we should better check all assigned devices.

As an initial solution this _may_ be good enough, but beware of
systems with very many devices.

>> > +static void vpci_bar_write(struct pci_dev *pdev, unsigned int reg,
>> > +                           union vpci_val val, void *data)
>> > +{
>> > +    struct vpci_bar *bar = data;
>> > +    uint8_t seg = pdev->seg, bus = pdev->bus;
>> > +    uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
>> > +    uint32_t wdata = val.u32, size_mask;
>> > +    bool hi = false;
>> > +
>> > +    switch ( bar->type )
>> > +    {
>> > +    case VPCI_BAR_MEM32:
>> > +    case VPCI_BAR_MEM64_LO:
>> > +        size_mask = (uint32_t)PCI_BASE_ADDRESS_MEM_MASK;
>> > +        break;
>> > +    case VPCI_BAR_MEM64_HI:
>> > +        size_mask = ~0u;
>> > +        break;
>> > +    default:
>> > +        ASSERT_UNREACHABLE();
>> > +        return;
>> > +    }
>> > +
>> > +    if ( (wdata & size_mask) == size_mask )
>> > +    {
>> > +        /* Next reads from this register are going to return the BAR size. */
>> > +        bar->sizing = true;
>> > +        return;
>>
>> I think the comment needs extending to explain why the written
>> sizing value can't possibly be an address. This is particularly
>> relevant because I'm not sure that assumption would hold on e.g.
>> ARM (which I don't think has guaranteed ROM right below 4Gb).
>
>Hm, right. Maybe it would be best to detect sizing by checking that
>the address when performing a read is ~0 on the high bits and ~0 &
>PCI_BASE_ADDRESS_MEM_MASK on the lower ones, instead of doing this
>kind of partial guessing as done here; it's certainly not very robust.

I don't understand, particularly because you say "when performing a
read". Or do you mean to do away with the "sizing" flag altogether?
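For reference, the handshake the "sizing" flag has to emulate is the usual
all-ones write followed by a read-back; a minimal sketch for a 32-bit memory
BAR is below (pci_cfg_read32()/pci_cfg_write32() are made-up config space
accessors for illustration, not Xen's helpers):

/*
 * Sketch of the classic BAR sizing sequence a guest performs and which
 * the vPCI handlers end up emulating.  Not Xen code.
 */
#include <stdint.h>

#define BAR_MEM_MASK  0xfffffff0U  /* low 4 bits of a memory BAR are flag bits */

uint32_t pci_cfg_read32(unsigned int reg);             /* assumed accessor */
void pci_cfg_write32(unsigned int reg, uint32_t val);  /* assumed accessor */

uint64_t size_mem_bar32(unsigned int reg)
{
    uint32_t orig = pci_cfg_read32(reg);
    uint32_t mask;

    /* Writing all-ones makes the device report which address bits it decodes. */
    pci_cfg_write32(reg, ~0U);
    mask = pci_cfg_read32(reg) & BAR_MEM_MASK;

    /* Restore the previous contents before anything else uses the BAR. */
    pci_cfg_write32(reg, orig);

    /* An unimplemented BAR reads back as zero. */
    return mask ? (uint64_t)~mask + 1 : 0;
}

In the patch's model the all-ones write is what sets the flag, and the
following read is then expected to return the size information instead of an
address.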
>> > +        /* Size the BAR and map it. */
>> > +        rc = pci_size_mem_bar(seg, bus, slot, func, reg, i == num_bars - 1,
>> > +                              &addr, &size);
>> > +        if ( rc < 0 )
>> > +            return rc;
>> > +
>> > +        if ( size == 0 )
>> > +        {
>> > +            bars[i].type = VPCI_BAR_EMPTY;
>> > +            continue;
>> > +        }
>> > +
>> > +        bars[i].addr = (cmd & PCI_COMMAND_MEMORY) ? addr : INVALID_PADDR;
>>
>> This doesn't match up with logic further up: When the memory decode
>> bit gets cleared, you don't zap the addresses, so I think you'd better
>> store it here too. Use INVALID_PADDR only when the value read has
>> all address bits set (same caveat as pointed out earlier).
>
>OK, note that .addr can only possibly be INVALID_PADDR at
>initialization time; once the user has written something to the BAR,
>.addr will be different from INVALID_PADDR.

Which is part of what worries me - it would be better if the field
wouldn't ever hold a special init-time-only value.

Jan
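To make the suggested direction concrete: the address read from hardware would
be recorded unconditionally, the decode state would live in a separate flag,
and INVALID_PADDR would only ever be used for a BAR whose address bits read
back as all-ones. A rough sketch, with an illustrative structure and helper
(and a stand-in definition of INVALID_PADDR) rather than the patch's code:

/*
 * Sketch only: always record the BAR address and track the memory decode
 * state separately, so .addr never needs a special init-time-only value.
 */
#include <stdbool.h>
#include <stdint.h>

#define BAR_MEM_MASK        0xfffffff0U
#define PCI_COMMAND_MEMORY  0x2                 /* memory space enable bit */
#define INVALID_PADDR       (~(uint64_t)0)      /* stand-in for Xen's macro */

struct vpci_bar_state {
    uint64_t addr;      /* last address seen in the BAR, decode state aside */
    uint64_t size;
    bool enabled;       /* mirrors PCI_COMMAND_MEMORY */
};

static void bar_init_from_hw(struct vpci_bar_state *bar, uint32_t bar_val,
                             uint16_t cmd)
{
    uint32_t addr = bar_val & BAR_MEM_MASK;

    /* Only a BAR left with all address bits set is treated as invalid. */
    bar->addr = (addr == BAR_MEM_MASK) ? INVALID_PADDR : addr;
    bar->enabled = cmd & PCI_COMMAND_MEMORY;
}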