Re: [Xen-devel] [PATCH 6/6] xen-pt: Round pci regions sizes to XEN_PAGE_SIZE
On Wed, Mar 20, 2019 at 01:28:47PM -0400, Jason Andryuk wrote:
> On Fri, Mar 15, 2019 at 12:28 PM Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx> wrote:
> >
> > On 15/03/2019 09:17, Paul Durrant wrote:
> > >> -----Original Message-----
> > >> From: Jason Andryuk [mailto:jandryuk@xxxxxxxxx]
> > >> Sent: 14 March 2019 18:16
> > >> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > >> Cc: qemu-devel@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> > >> marmarek@xxxxxxxxxxxxxxxxxxxxxx; Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>;
> > >> Stefano Stabellini <sstabellini@xxxxxxxxxx>; Anthony Perard <anthony.perard@xxxxxxxxxx>
> > >> Subject: Re: [PATCH 6/6] xen-pt: Round pci regions sizes to XEN_PAGE_SIZE
> > >>
> > >> On Wed, Mar 13, 2019 at 11:09 AM Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
> > >>>> -----Original Message-----
> > >>>> From: Jason Andryuk [mailto:jandryuk@xxxxxxxxx]
> > >>>> Sent: 11 March 2019 18:02
> > >>>> To: qemu-devel@xxxxxxxxxx
> > >>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; marmarek@xxxxxxxxxxxxxxxxxxxxxx;
> > >>>> Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>; Jason Andryuk <jandryuk@xxxxxxxxx>;
> > >>>> Stefano Stabellini <sstabellini@xxxxxxxxxx>; Anthony Perard <anthony.perard@xxxxxxxxxx>;
> > >>>> Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > >>>> Subject: [PATCH 6/6] xen-pt: Round pci regions sizes to XEN_PAGE_SIZE
> > >>>>
> > >>>> From: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> > >>>>
> > >>>> If a PCI memory region has a size < XEN_PAGE_SIZE it can get located at
> > >>>> an address which is not page aligned.
> > >>>
> > >>> IIRC the PCI spec says that the minimum memory region size should be at
> > >>> least 4k. Should we even be tolerating BARs smaller than that?
> > >>>
> > >>>   Paul
> > >>>
> > >> Hi, Paul.
> > >>
> > >> Simon found this, so it affects a real device. Simon, do you recall
> > >> which device was affected?
> > >>
> > >> I think BARs only need to be power-of-two size and aligned, and 4k is
> > >> not a minimum. 16 bytes may be a minimum, but I don't know what the
> > >> spec says.
> > >>
> > >> On an Ivy Bridge system, here are some of the devices with BARs
> > >> smaller than 4K:
> > >>
> > >> 00:16.0 Communication controller: Intel Corporation 7 Series/C210
> > >>   Series Chipset Family MEI Controller #1 (rev 04)
> > >>   Memory at d0735000 (64-bit, non-prefetchable) [disabled] [size=16]
> > >> 00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset
> > >>   Family USB Enhanced Host Controller #1 (rev 04) (prog-if 20 [EHCI])
> > >>   Memory at d0739000 (32-bit, non-prefetchable) [disabled] [size=1K]
> > >> 00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family
> > >>   SMBus Controller (rev 04)
> > >>   Memory at d0734000 (64-bit, non-prefetchable) [disabled] [size=256]
> > >> 02:00.0 System peripheral: JMicron Technology Corp. SD/MMC Host
> > >>   Controller (rev 30)
> > >>   Memory at d0503000 (32-bit, non-prefetchable) [disabled] [size=256]
> > >>
> > >> These examples are all 4K aligned, so this is not an issue on this
> > >> machine.
> > >>
> > >> Reviewing the code, I'm now wondering if the following in
> > >> hw/xen/xen_pt.c:xen_pt_region_update is wrong:
> > >>
> > >>     rc = xc_domain_memory_mapping(xen_xc, xen_domid,
> > >>                                   XEN_PFN(guest_addr + XC_PAGE_SIZE - 1),
> > >>                                   XEN_PFN(machine_addr + XC_PAGE_SIZE - 1),
> > >>                                   XEN_PFN(size + XC_PAGE_SIZE - 1),
> > >>                                   op);
> > >>
> > >> If a BAR of size 0x100 is at 0xd0500800, then the machine_addr passed
> > >> in would be 0xd0501000, which is past the actual location. Should the
> > >> call arguments just be XEN_PFN(guest_addr) & XEN_PFN(machine_addr)?
> > >>
> > >> BARs smaller than a page would also be a problem if BARs for different
> > >> devices shared the same page.
> > >
> > > Exactly.
> > > We cannot pass them through with any degree of safety (not that
> > > passthrough of an arbitrary device is a particularly safe thing to do
> > > anyway). The xen-pt code would instead need to trap those BARs and
> > > perform the accesses to the real BAR itself. Ultimately, though, I think
> > > we should be retiring the xen-pt code in favour of a standalone emulator.
> >
> > It doesn't matter if the BAR is smaller than 4k, if there are holes next
> > to it.
> >
> > Do we know what the case is in practice for these USB controllers?
> >
> > If the worst comes to the worst, we can re-enumerate the PCI bus to
> > ensure that all BARs smaller than 4k still have 4k alignment between
> > them. That way we can safely pass them through even when they are smaller.
>
> Andrew, thanks for checking the spec on the minimum BAR size.
>
> Dropping the "Round PCI region" patch from QEMU, the guest HVM will have:
>
> 00:06.0 SD Host controller: Ricoh Co Ltd PCIe SDXC/MMC Host Controller (rev 07)
>   Memory at f2028800 (32-bit, non-prefetchable) [size=256]
> 00:07.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host
>   Controller (rev 04) (prog-if 30 [XHCI])
>   Memory at f2024000 (64-bit, non-prefetchable) [size=8K]
> 00:08.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset
>   Family USB Enhanced Host Controller #2 (rev 05) (prog-if 20 [EHCI])
>   Memory at f2028000 (32-bit, non-prefetchable) [size=1K]
> 00:09.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset
>   Family USB Enhanced Host Controller #1 (rev 05) (prog-if 20 [EHCI])
>   Memory at f2028400 (32-bit, non-prefetchable) [size=1K]
>
> 00:09.0, 00:08.0 & 00:06.0 all share the same page. Only 00:08.0 is
> working. With some added debugging output, you'll see that the same
> page is used for three of the BARs.
>
> [00:06.0] mapping guest_addr 0xf2028800 gfn 0xf2028 to maddr 0xe1a30000 mfn 0xe1a30
> [00:07.0] mapping guest_addr 0xf2024000 gfn 0xf2024 to maddr 0xe0800000 mfn 0xe0800
> [00:09.0] mapping guest_addr 0xf2028400 gfn 0xf2028 to maddr 0xe1900000 mfn 0xe1900
> [00:08.0] mapping guest_addr 0xf2028000 gfn 0xf2028 to maddr 0xe1a2f000 mfn 0xe1a2f

The patch below should prevent hvmloader from placing multiple BARs on
the same page; could you give it a try? Note that this is not going to
prevent the guest from moving those BARs around and placing them in the
same page, thus breaking the initial placement done by hvmloader.

Thanks, Roger.
---8<---
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 0b708bf578..c433b34cd6 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -489,6 +489,10 @@ void pci_setup(void)
 
         resource->base = base;
 
+        if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
+             PCI_BASE_ADDRESS_SPACE_MEMORY )
+            resource->base = ROUNDUP(resource->base, PAGE_SIZE);
+
         pci_writel(devfn, bar_reg, bar_data);
         if (using_64bar)
             pci_writel(devfn, bar_reg + 4, bar_data_upper);
diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
index 7bca6418d2..b5554b5844 100644
--- a/tools/firmware/hvmloader/util.h
+++ b/tools/firmware/hvmloader/util.h
@@ -51,6 +51,8 @@ void __bug(char *file, int line) __attribute__((noreturn));
 #define MB(mb) (mb##ULL << 20)
 #define GB(gb) (gb##ULL << 30)
 
+#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))
+
 static inline int test_bit(unsigned int b, const void *p)
 {
     return !!(((const uint8_t *)p)[b>>3] & (1u<<(b&7)));
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel