Re: [PATCH v2 8/9] xue: mark DMA buffers as reserved for the device

On 06.07.2022 17:32, Marek Marczykowski-Górecki wrote:
> The important part is to include those buffers in IOMMU page table
> relevant for the USB controller. Otherwise, DbC will stop working as
> soon as IOMMU is enabled, regardless of to which domain device assigned
> (be it xen or dom0).
> If the device is passed through to dom0 or other domain (see later
> patches), that domain will effectively have access to those buffers too.
> It does give such domain yet another way to DoS the system (as is the
> case when having PCI device assigned already), but also possibly steal
> the console ring content. Thus, such domain should be a trusted one.
> In any case, prevent anything else being placed on those pages by adding
> artificial padding.

I don't think this device should be allowed to be assigned to any DomU.
Just like the EHCI driver, at some point in the series you will want to
call pci_hide_device() (you may already do, and I may merely have
missed it).

> Using this API for DbC pages requires raising MAX_USER_RMRR_PAGES.

I disagree here: This is merely an effect of you re-using the
pre-existing functionality there with too little customization. I think
the respective check shouldn't be applied for internal uses.

> @@ -952,13 +953,21 @@ static struct uart_driver xue_uart_driver = {
>      .flush = xue_uart_flush,
>  };
>  
> -static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static struct xue_erst_segment erst __aligned(64);
> -static struct xue_dbc_ctx ctx __aligned(64);
> -static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static char str_buf[XUE_PAGE_SIZE] __aligned(64);
> +struct xue_dma_bufs {
> +    struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    struct xue_erst_segment erst __aligned(64);
> +    struct xue_dbc_ctx ctx __aligned(64);
> +    uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    char str_buf[XUE_PAGE_SIZE] __aligned(64);

Please arrange data such that the amount of unused (padding) space is
minimal. If I'm not mistaken the page-size-aligned fields are also an
even multiple of pages in size, in which case they all want to go ahead
of the 64-byte aligned but sub-page-size fields (as per an earlier
comment str_buf[] likely can go away anyway, being the one field which
is a page in size but having smaller alignment).

> +    /*
> +     * Don't place anything else on this page - it will be
> +     * DMA-reachable by the USB controller.
> +     */
> +    char _pad[0] __aligned(XUE_PAGE_SIZE);

I don't think this is needed, due to sizeof() being required to be a
multiple of alignof().

> +};
> +static struct xue_dma_bufs xue_dma_bufs __aligned(XUE_PAGE_SIZE);

I don't think the alignment here is needed, as the struct will already
have suitable alignment (derived from the biggest field alignment
value). Instead please consider putting this in .bss.page_aligned.
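
To illustrate what I have in mind (merely a sketch, not even compile
tested; I'm leaving out str_buf[] per the earlier comment, and I'm
assuming XUE_PAGE_SIZE matches the hypervisor's PAGE_SIZE):

struct xue_dma_bufs {
    /* Fields which are whole multiples of a page in size go first. */
    struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
    struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
    struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
    uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
    /* Sub-page-size fields then share the trailing page. */
    struct xue_erst_segment erst __aligned(64);
    struct xue_dbc_ctx ctx __aligned(64);
    /*
     * No trailing padding member needed: sizeof() is rounded up to a
     * multiple of the structure's (page size) alignment anyway.
     */
};
/* Member alignment already renders the object page-aligned. */
static struct xue_dma_bufs __section(".bss.page_aligned") xue_dma_bufs;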

> @@ -990,16 +999,22 @@ void __init xue_uart_init(void)
>          xue->sbdf = PCI_SBDF(0, bus, slot, func);
>      }
>  
> -    xue->dbc_ctx = &ctx;
> -    xue->dbc_erst = &erst;
> -    xue->dbc_ering.trb = evt_trb;
> -    xue->dbc_oring.trb = out_trb;
> -    xue->dbc_iring.trb = in_trb;
> -    xue->dbc_owork.buf = wrk_buf;
> -    xue->dbc_str = str_buf;
> +    xue->dbc_ctx = &xue_dma_bufs.ctx;
> +    xue->dbc_erst = &xue_dma_bufs.erst;
> +    xue->dbc_ering.trb = xue_dma_bufs.evt_trb;
> +    xue->dbc_oring.trb = xue_dma_bufs.out_trb;
> +    xue->dbc_iring.trb = xue_dma_bufs.in_trb;
> +    xue->dbc_owork.buf = xue_dma_bufs.wrk_buf;
> +    xue->dbc_str = xue_dma_bufs.str_buf;
>  
>      if ( xue_open(xue) )
> +    {
> +        iommu_add_extra_reserved_device_memory(
> +            PFN_DOWN(virt_to_maddr(&xue_dma_bufs)),

virt_to_pfn()?
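
I.e. presumably just (spelling out only this one argument, and assuming
virt_to_pfn() is applicable to this static object here; the further,
not quoted arguments would stay as they are in the patch):

        iommu_add_extra_reserved_device_memory(
            virt_to_pfn(&xue_dma_bufs),
            ...);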

Jan