
Re: [PATCH v2 8/9] xue: mark DMA buffers as reserved for the device


  • To: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 14 Jul 2022 13:51:06 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 14 Jul 2022 11:51:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 06.07.2022 17:32, Marek Marczykowski-Górecki wrote:
> The important part is to include those buffers in the IOMMU page table
> relevant for the USB controller. Otherwise, DbC will stop working as
> soon as the IOMMU is enabled, regardless of which domain the device is
> assigned to (be it Xen or dom0).
> If the device is passed through to dom0 or another domain (see later
> patches), that domain will effectively have access to those buffers too.
> This gives such a domain yet another way to DoS the system (as is
> already the case when a PCI device is assigned), but also possibly a
> way to steal the console ring contents. Thus, such a domain should be
> a trusted one.
> In any case, prevent anything else from being placed on those pages by
> adding artificial padding.

I don't think this device should be allowed to be assigned to any
DomU. Just like the EHCI driver, at some point in the series you
will want to call pci_hide_device() (you may already do so, and I
may merely have missed it).
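For illustration only (untested, and assuming the usual three-argument
pci_hide_device() with a leading segment parameter, mirroring what the
EHCI debug driver does), something along these lines during init:

    /* Keep the DbC-capable controller away from domain assignment,
     * like the EHCI debug driver does for its controller. Sketch
     * only - the exact call site in this series may differ. */
    if ( pci_hide_device(0, bus, PCI_DEVFN(slot, func)) )
        printk(XENLOG_ERR
               "Failed to hide xHCI console device %02x:%02x.%u\n",
               bus, slot, func);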

> Using this API for DbC pages requires raising MAX_USER_RMRR_PAGES.

I disagree here: This is merely an effect of you re-using the pre-
existing functionality there with too little customization. I think
the respective check shouldn't be applied for internal uses.

> @@ -952,13 +953,21 @@ static struct uart_driver xue_uart_driver = {
>      .flush = xue_uart_flush,
>  };
>  
> -static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static struct xue_erst_segment erst __aligned(64);
> -static struct xue_dbc_ctx ctx __aligned(64);
> -static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
> -static char str_buf[XUE_PAGE_SIZE] __aligned(64);
> +struct xue_dma_bufs {
> +    struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    struct xue_erst_segment erst __aligned(64);
> +    struct xue_dbc_ctx ctx __aligned(64);
> +    uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +    char str_buf[XUE_PAGE_SIZE] __aligned(64);

Please arrange data such that the amount of unused (padding) space
is minimal. If I'm not mistaken the page-size-aligned fields are
also an even multiple of pages in size, in which case they all want
to go ahead of the 64-byte aligned but sub-page-size fields (as
per an earlier comment str_buf[] likely can go away anyway, being
the one field which is a page in size but having smaller alignment).
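I.e. roughly this (a sketch; str_buf kept here for illustration even
though, per the earlier comment, it may disappear entirely):

    struct xue_dma_bufs {
        /* Whole-page fields first, so they introduce no padding. */
        struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
        struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
        struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
        uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
        /* Sub-page, 64-byte-aligned fields last, packing together in
         * the trailing page(s) instead of each padding out to a page. */
        char str_buf[XUE_PAGE_SIZE] __aligned(64);
        struct xue_erst_segment erst __aligned(64);
        struct xue_dbc_ctx ctx __aligned(64);
    };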

> +    /*
> +     * Don't place anything else on this page - it will be
> +     * DMA-reachable by the USB controller.
> +     */
> +    char _pad[0] __aligned(XUE_PAGE_SIZE);

I don't think this is needed, due to sizeof() being required to be
a multiple of alignof().
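If you want that property enforced explicitly, a build-time check
would do (a sketch, to live in some function such as xue_uart_init()):

    /* sizeof() of the struct is necessarily a multiple of its
     * alignment, so its tail already extends to a page boundary. */
    BUILD_BUG_ON(sizeof(struct xue_dma_bufs) % XUE_PAGE_SIZE);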

> +};
> +static struct xue_dma_bufs xue_dma_bufs __aligned(XUE_PAGE_SIZE);

I don't think the alignment here is needed, as the struct will
already have suitable alignment (derived from the biggest field
alignment value). Instead please consider putting this in
.bss.page_aligned.
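I.e. something like this (a sketch, using Xen's __section() wrapper):

    /* Alignment comes from the members already; the section merely
     * keeps the buffers away from unrelated .bss data. */
    static struct xue_dma_bufs __section(".bss.page_aligned") xue_dma_bufs;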

> @@ -990,16 +999,22 @@ void __init xue_uart_init(void)
>          xue->sbdf = PCI_SBDF(0, bus, slot, func);
>      }
>  
> -    xue->dbc_ctx = &ctx;
> -    xue->dbc_erst = &erst;
> -    xue->dbc_ering.trb = evt_trb;
> -    xue->dbc_oring.trb = out_trb;
> -    xue->dbc_iring.trb = in_trb;
> -    xue->dbc_owork.buf = wrk_buf;
> -    xue->dbc_str = str_buf;
> +    xue->dbc_ctx = &xue_dma_bufs.ctx;
> +    xue->dbc_erst = &xue_dma_bufs.erst;
> +    xue->dbc_ering.trb = xue_dma_bufs.evt_trb;
> +    xue->dbc_oring.trb = xue_dma_bufs.out_trb;
> +    xue->dbc_iring.trb = xue_dma_bufs.in_trb;
> +    xue->dbc_owork.buf = xue_dma_bufs.wrk_buf;
> +    xue->dbc_str = xue_dma_bufs.str_buf;
>  
>      if ( xue_open(xue) )
> +    {
> +        iommu_add_extra_reserved_device_memory(
> +                PFN_DOWN(virt_to_maddr(&xue_dma_bufs)),

virt_to_pfn()?
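I.e. (sketch only; the remaining arguments stay as in the patch):

    iommu_add_extra_reserved_device_memory(
            virt_to_pfn(&xue_dma_bufs), /* instead of
                                           PFN_DOWN(virt_to_maddr(...)) */
            /* ... remaining arguments unchanged ... */);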

Jan
