
Re: [Xen-devel] [PATCH 1/2] xen/dom0: Improve documentation for dom0= and dom0-iommu=



>>> On 21.12.18 at 00:40, <andrew.cooper3@xxxxxxxxxx> wrote:
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -636,55 +636,76 @@ trace feature is only enabled in debugging builds of Xen.
>  
>  Specify the bit width of the DMA heap.
>  
> -### dom0 (x86)
> -> `= List of [ pvh | shadow ]`
> +### dom0
> +> `= List of [ pvh=<bool>, shadow=<bool> ]`
>  
> -> Sub-options:
> -
> -> `pvh`
> +> Applicability: x86

Why the new tag, when everything else uses (x86) next to the
option name?
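
Independent of that, an example invocation might be worth adding to the
doc, along the lines of

    dom0=pvh=1

to request a PVH dom0 (the value is purely illustrative).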

>  ### dom0-iommu
> -> `= List of [ passthrough | strict | map-inclusive ]`
> -
> -This list of booleans controls the iommu usage by Dom0:
> -
> -* `passthrough`: disables DMA remapping for Dom0. Default is `false`. Note that
> -  this option is hard coded to `false` for a PVH Dom0 and any attempt to
> -  overwrite it from the command line is ignored.
> -
> -* `strict`: sets up DMA remapping only for the RAM Dom0 actually got assigned.
> -  Default is `false` which means Dom0 will get mappings for all the host
> -  RAM except regions in use by Xen. Note that this option is hard coded to
> -  `true` for a PVH Dom0 and any attempt to overwrite it from the command line
> -  is ignored.
> -
> -* `map-inclusive`: sets up DMA remapping for all the non-RAM regions below 4GB
> -  except for unusable ranges. Use this to work around firmware issues providing
> -  incorrect RMRR/IVMD entries. Rather than only mapping RAM pages for IOMMU
> -  accesses for Dom0, with this option all pages up to 4GB, not marked as
> -  unusable in the E820 table, will get a mapping established. Note that this
> -  option is only applicable to a PV Dom0 and is enabled by default on Intel
> -  hardware.
> -
> -* `map-reserved`: sets up DMA remapping for all the reserved regions in the
> -  memory map for Dom0. Use this to work around firmware issues providing
> -  incorrect RMRR/IVMD entries. Rather than only mapping RAM pages for IOMMU
> -  accesses for Dom0, all memory regions marked as reserved in the memory map
> -  that don't overlap with any MMIO region from emulated devices will be
> -  identity mapped. This option maps a subset of the memory that would be
> -  mapped when using the `map-inclusive` option. This option is available to all
> -  Dom0 modes and is enabled by default on Intel hardware.
> +> `= List of [ passthrough=<bool>, strict=<bool>, map-inclusive=<bool>,
> +>              map-reserved=<bool> ]`
> +
> +Controls for the dom0 IOMMU setup.
> +
> +*   The `passthrough` boolean is applicable to x86 PV dom0's only and defaults
> +    to false.  It controls whether the IOMMU is fully disabled for devices
> +    belonging to dom0 (`passthrough=1`), or whether the IOMMU is set up with
> +    an identity transform for dom0 (`passthrough=0`) to prevent dom0 from
> +    DMA'ing outside of its permitted areas.
> +
> +    This option is hardwired to false for x86 PVH dom0's (where a non-identity
> +    transform is required for dom0 to function), and is ignored for ARM.
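
(A short example in the doc may help here, e.g.

    dom0-iommu=passthrough=1

to turn off DMA remapping for a PV dom0, with multiple booleans
combinable as a comma-separated list; the value is illustrative only.)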
> +
> +*   The `strict` boolean is applicable to x86 PV dom0's only and defaults to
> +    false.  It controls whether dom0 can have IOMMU mappings for all domain
> +    RAM in the system, or only for its allocated RAM (and grant mappings etc.)
> +
> +    This option is hardwired to true for x86 PVH dom0's (as RAM belonging to
> +    other domains in the system don't live in a compatible address space), and
> +    is ignored for ARM.
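
(Likewise, e.g.

    dom0-iommu=strict=1

to restrict a PV dom0's IOMMU mappings to its own RAM; again purely
illustrative.)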
> +
> +*   The `map-inclusive` boolean is applicable to x86 PV dom0's, and sets up DMA
> +    remapping for all non-RAM regions below 4GB except for unusable ranges.

I don't think this is PV-specific; just its default is.
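
That is, something like

    dom0-iommu=map-inclusive=1

would, if I'm not mistaken, be meaningful for a PVH dom0 as well, with
only the default differing between dom0 modes.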

Jan


