
Re: [Xen-devel] [PATCH v5 5/7] xen/arm: zynqmp: eemi access control



On Tue, 11 Dec 2018, Julien Grall wrote:
> Hi,
> 
> On 03/12/2018 21:03, Stefano Stabellini wrote:
> > From: "Edgar E. Iglesias" <edgar.iglesias@xxxxxxxxxx>
> > 
> > From: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
> > 
> > Introduce data structs to implement basic access controls.
> > Introduce the following three functions:
> > 
> > domain_has_node_access: check access to the node
> > domain_has_reset_access: check access to the reset line
> > domain_has_mmio_access: check access to the register
> > 
> > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
> > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
> > 
> > ---
> > Statically defines:
> > 
> > - pm_node_access
> > It encodes the relationship between a node id and the start of the MMIO
> > region of a device in the corresponding power domain. It is used for
> > permission checking. Although the MMIO region start address is available
> > in the device tree and could be derived from there (we plan to improve that
> > in the future), the relationship between a node id and corresponding
> > devices is not described and needs to be hardcoded.
> > 
> > - pm_reset_access
> > Same as pm_node_access for reset lines.
> > 
> > ---
> > Changes in v5:
> > - improve in-code comments
> > - use mfn_t in struct pm_access
> > - remove mmio_access table
> > 
> > Changes in v4:
> > - add #include as needed
> > - add #if 0 for bisectability
> > - use mfn_t in pm_check_access
> > - add wrap-around ASSERT in domain_has_mmio_access
> > - use GENMASK in domain_has_mmio_access
> > - proper bound checks (== ARRAY_SIZE is out of bounds)
> > ---
> >   xen/arch/arm/platforms/xilinx-zynqmp-eemi.c | 348 ++++++++++++++++++++++++++++
> >   1 file changed, 348 insertions(+)
> > 
> > diff --git a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
> > index 369bb3f..92a02df 100644
> > --- a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
> > +++ b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
> > @@ -16,9 +16,357 @@
> >    * GNU General Public License for more details.
> >    */
> >
> > +/*
> > + *  EEMI Power Management API access
> > + *
> > + * Refs:
> > + *
> > + * https://www.xilinx.com/support/documentation/user_guides/ug1200-eemi-api.pdf
> > + *
> > + * Background:
> > + * The ZynqMP has a subsystem named the PMU with a CPU and special devices
> > + * dedicated to running Power Management Firmware. Other masters in the
> > + * system need to send requests to the PMU in order to, for example:
> > + * * Manage power state
> > + * * Configure clocks
> > + * * Program bitstreams for the programmable logic
> > + * * etc
> > + *
> > + * Although the details of the setup are configurable, in the common case
> > + * the PMU lives in the Secure world. The NS world cannot directly communicate
> > + * with it and must use proxy services from ARM Trusted Firmware to reach
> > + * the PMU.
> > + *
> > + * Power Management on the ZynqMP is implemented in a layered manner.
> > + * The PMU knows about various masters and will enforce access controls
> > + * based on a pre-configured partitioning. This configuration dictates
> > + * which devices are owned by the various masters and the PMU FW makes sure
> > + * that a given master cannot turn off a device that it does not own or that
> > + * is in use by other masters.
> > + *
> > + * The PMU is not aware of multiple execution states in masters.
> > + * For example, it treats the ARMv8 cores as single units and does not
> > + * distinguish between Secure and NS OSes, nor does it know about hypervisors
> > + * and multiple guests. It is up to software on the ARMv8 cores to present
> > + * a unified view of its power requirements.
> > + *
> > + * To implement this unified view, ARM Trusted Firmware at EL3 provides
> > + * access to the PM API via SMC calls. ARM Trusted Firmware is responsible
> > + * for mediating between the Secure and the NS world, rejecting SMC calls
> > + * that request changes that are not allowed.
> > + *
> > + * Xen running above ATF owns the NS world and is responsible for presenting
> > + * unified PM requests taking all guests and the hypervisor into account.
> > + *
> > + * Implementation:
> > + * The PM API contains different classes of calls.
> > + * Certain calls are harmless to expose to any guest.
> > + * These include calls to get the PM API Version, or to read out the version
> > + * of the chip we're running on.
> > + *
> > + * In order to correctly virtualize these calls, we need to know if
> > + * guests issuing these calls have ownership of the given device.
> > + * The approach taken here is to map PM API Nodes identifying
> > + * a device into base addresses for registers that belong to that
> > + * same device.
> > + *
> > + * If the guest has access to a device's registers, we give the guest
> > + * access to PM API calls that affect that device. This is implemented
> > + * by pm_node_access and domain_has_node_access().
> > + */
> > +
> > +#include <xen/iocap.h>
> > +#include <xen/sched.h>
> >   #include <asm/regs.h>
> >   #include <asm/platforms/xilinx-zynqmp-eemi.h>
> >
> > +#if 0
> > +struct pm_access
> > +{
> > +    mfn_t mfn;
> > +    bool hwdom_access;    /* HW domain gets access regardless.  */
> > +};
> > +
> > +/*
> > + * This table maps a node into a memory address.
> 
> > Some of the nodes below don't have a memory address. So this comment has to be
> updated.

Yes, I'll update and improve the comment
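
Perhaps something along these lines (just a rough draft of the wording,
based on how the table below actually uses hwdom_access):

    /*
     * This table maps a PM node ID to the base address of the MMIO
     * region of the device in the corresponding power domain, where
     * such a region exists.  Nodes without a device MMIO region (e.g.
     * the PLLs) are marked hwdom_access and are reserved for the
     * hardware domain.
     *
     * If a guest has access to the address, it has enough control over
     * the node to grant it access to EEMI calls for that node.
     */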


> > + * If a guest has access to the address, it has enough control
> > + * over the node to grant it access to EEMI calls for that node.
> > + */
> > +static const struct pm_access pm_node_access[] = {
> > +    /* MM_RPU grants access to all RPU Nodes.  */
> > +    [NODE_RPU] = { mfn_init(MM_RPU) },
> > +    [NODE_RPU_0] = { mfn_init(MM_RPU) },
> > +    [NODE_RPU_1] = { mfn_init(MM_RPU) },
> > +    [NODE_IPI_RPU_0] = { mfn_init(MM_RPU) },
> > +
> > +    /* GPU nodes.  */
> > +    [NODE_GPU] = { mfn_init(MM_GPU) },
> > +    [NODE_GPU_PP_0] = { mfn_init(MM_GPU) },
> > +    [NODE_GPU_PP_1] = { mfn_init(MM_GPU) },
> > +
> > +    [NODE_USB_0] = { mfn_init(MM_USB3_0_XHCI) },
> > +    [NODE_USB_1] = { mfn_init(MM_USB3_1_XHCI) },
> > +    [NODE_TTC_0] = { mfn_init(MM_TTC0) },
> > +    [NODE_TTC_1] = { mfn_init(MM_TTC1) },
> > +    [NODE_TTC_2] = { mfn_init(MM_TTC2) },
> > +    [NODE_TTC_3] = { mfn_init(MM_TTC3) },
> > +    [NODE_SATA] = { mfn_init(MM_SATA_AHCI_HBA) },
> > +    [NODE_ETH_0] = { mfn_init(MM_GEM0) },
> > +    [NODE_ETH_1] = { mfn_init(MM_GEM1) },
> > +    [NODE_ETH_2] = { mfn_init(MM_GEM2) },
> > +    [NODE_ETH_3] = { mfn_init(MM_GEM3) },
> > +    [NODE_UART_0] = { mfn_init(MM_UART0) },
> > +    [NODE_UART_1] = { mfn_init(MM_UART1) },
> > +    [NODE_SPI_0] = { mfn_init(MM_SPI0) },
> > +    [NODE_SPI_1] = { mfn_init(MM_SPI1) },
> > +    [NODE_I2C_0] = { mfn_init(MM_I2C0) },
> > +    [NODE_I2C_1] = { mfn_init(MM_I2C1) },
> > +    [NODE_SD_0] = { mfn_init(MM_SD0) },
> > +    [NODE_SD_1] = { mfn_init(MM_SD1) },
> > +    [NODE_DP] = { mfn_init(MM_DP) },
> > +
> > +    /* Guest with GDMA Channel 0 gets PM access. Other guests don't.  */
> > +    [NODE_GDMA] = { mfn_init(MM_GDMA_CH0) },
> > +    /* Guest with ADMA Channel 0 gets PM access. Other guests don't.  */
> > +    [NODE_ADMA] = { mfn_init(MM_ADMA_CH0) },
> > +
> > +    [NODE_NAND] = { mfn_init(MM_NAND) },
> > +    [NODE_QSPI] = { mfn_init(MM_QSPI) },
> > +    [NODE_GPIO] = { mfn_init(MM_GPIO) },
> > +    [NODE_CAN_0] = { mfn_init(MM_CAN0) },
> > +    [NODE_CAN_1] = { mfn_init(MM_CAN1) },
> > +
> > +    /* Only for the hardware domain.  */
> > +    [NODE_AFI] = { .hwdom_access = true },
> > +    [NODE_APLL] = { .hwdom_access = true },
> > +    [NODE_VPLL] = { .hwdom_access = true },
> > +    [NODE_DPLL] = { .hwdom_access = true },
> > +    [NODE_RPLL] = { .hwdom_access = true },
> > +    [NODE_IOPLL] = { .hwdom_access = true },
> > +    [NODE_DDR] = { .hwdom_access = true },
> > +    [NODE_IPI_APU] = { .hwdom_access = true },
> > +    [NODE_PCAP] = { .hwdom_access = true },
> > +
> > +    [NODE_PCIE] = { mfn_init(MM_PCIE_ATTRIB) },
> > +    [NODE_RTC] = { mfn_init(MM_RTC) },
> > +};
> > +
> > +/*
> > + * This table maps reset line IDs into a memory address.
> 
> Same here.
> 
> > + * If a guest has access to the address, it has enough control
> > + * over the affected node to grant it access to EEMI calls for
> > + * resetting that node.
> > + */
> > +#define XILPM_RESET_IDX(n) ((n) - XILPM_RESET_PCIE_CFG)
> > +static const struct pm_access pm_reset_access[] = {
> > +    [XILPM_RESET_IDX(XILPM_RESET_PCIE_CFG)] = { mfn_init(MM_AXIPCIE_MAIN) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_PCIE_BRIDGE)] = { mfn_init(MM_PCIE_ATTRIB) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_PCIE_CTRL)] = { mfn_init(MM_PCIE_ATTRIB) },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_DP)] = { mfn_init(MM_DP) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRF)] = { mfn_init(MM_SWDT) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM5)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM4)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM3)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM2)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM1)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM0)] = { .hwdom_access = true },
> > +
> > +    /* Channel 0 grants PM access.  */
> > +    [XILPM_RESET_IDX(XILPM_RESET_GDMA)] = { mfn_init(MM_GDMA_CH0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPU_PP1)] = { mfn_init(MM_GPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPU_PP0)] = { mfn_init(MM_GPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GT)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SATA)] = { mfn_init(MM_SATA_AHCI_HBA) },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_APM_FPD)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SOFT)] = { .hwdom_access = true },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_GEM0)] = { mfn_init(MM_GEM0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GEM1)] = { mfn_init(MM_GEM1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GEM2)] = { mfn_init(MM_GEM2) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GEM3)] = { mfn_init(MM_GEM3) },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_QSPI)] = { mfn_init(MM_QSPI) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_UART0)] = { mfn_init(MM_UART0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_UART1)] = { mfn_init(MM_UART1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SPI0)] = { mfn_init(MM_SPI0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SPI1)] = { mfn_init(MM_SPI1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SDIO0)] = { mfn_init(MM_SD0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SDIO1)] = { mfn_init(MM_SD1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_CAN0)] = { mfn_init(MM_CAN0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_CAN1)] = { mfn_init(MM_CAN1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_I2C0)] = { mfn_init(MM_I2C0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_I2C1)] = { mfn_init(MM_I2C1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_TTC0)] = { mfn_init(MM_TTC0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_TTC1)] = { mfn_init(MM_TTC1) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_TTC2)] = { mfn_init(MM_TTC2) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_TTC3)] = { mfn_init(MM_TTC3) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRL)] = { mfn_init(MM_SWDT) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_NAND)] = { mfn_init(MM_NAND) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_ADMA)] = { mfn_init(MM_ADMA_CH0) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPIO)] = { mfn_init(MM_GPIO) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_IOU_CC)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_TIMESTAMP)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_R50)] = { mfn_init(MM_RPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_R51)] = { mfn_init(MM_RPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_AMBA)] = { mfn_init(MM_RPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_OCM)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_PGE)] = { mfn_init(MM_RPU) },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_USB0_CORERESET)] = { mfn_init(MM_USB3_0_XHCI) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_USB0_HIBERRESET)] = { mfn_init(MM_USB3_0_XHCI) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_USB0_APB)] = { mfn_init(MM_USB3_0_XHCI) },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_USB1_CORERESET)] = { mfn_init(MM_USB3_1_XHCI) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_USB1_HIBERRESET)] = { mfn_init(MM_USB3_1_XHCI) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_USB1_APB)] = { mfn_init(MM_USB3_1_XHCI) },
> > +
> > +    [XILPM_RESET_IDX(XILPM_RESET_IPI)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_APM_LPD)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RTC)] = { mfn_init(MM_RTC) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_SYSMON)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM6)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_LPD_SWDT)] = { mfn_init(MM_SWDT) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_FPD)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG1)] = { mfn_init(MM_RPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG0)] = { mfn_init(MM_RPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_DBG_LPD)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_DBG_FPD)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_APLL)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_DPLL)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_VPLL)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_IOPLL)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPLL)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_0)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_1)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_2)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_3)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_4)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_5)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_6)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_7)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_8)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_9)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_10)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_11)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_12)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_13)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_14)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_15)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_16)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_17)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_18)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_19)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_20)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_21)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_22)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_23)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_24)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_25)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_26)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_27)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_28)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_29)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_30)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_31)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_RPU_LS)] = { mfn_init(MM_RPU) },
> > +    [XILPM_RESET_IDX(XILPM_RESET_PS_ONLY)] = { .hwdom_access = true },
> > +    [XILPM_RESET_IDX(XILPM_RESET_PL)] = { .hwdom_access = true },
> > +};
> > +
> > +static bool pm_check_access(const struct pm_access *acl, struct domain *d,
> > +                            uint32_t idx)
> > +{
> > +    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
> > +        return true;
> > +
> > +    if ( !mfn_x(acl[idx].mfn) )
> 
> Technically 0 is a valid mfn. If you want to encode an invalid value then
> INVALID_MFN is safer.
> 
> But what are you trying to prevent? Are the node IDs not allocated
> contiguously?

I improved the comments above now. But the idea is that a zero address
means nobody has access. None of those resources actually has a zero
address, so there is no risk of conflict with a valid zero address.

Following a comment by Jan, I have also started using INVALID_MFN to
encode "only dom0 has access", getting rid of .hwdom_access completely.

So in the next series (rough sketch below):

- valid address -> regular iomem permission check
- 0 -> no access for anybody
- INVALID_MFN -> dom0 only
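
An untested sketch of how pm_check_access could look with that encoding
(the iomem check at the end is unchanged from this patch):

    static bool pm_check_access(const struct pm_access *acl, struct domain *d,
                                uint32_t idx)
    {
        mfn_t mfn = acl[idx].mfn;

        /* 0 encodes "no access for anybody". */
        if ( mfn_eq(mfn, _mfn(0)) )
            return false;

        /* INVALID_MFN encodes "hardware domain only". */
        if ( mfn_eq(mfn, INVALID_MFN) )
            return is_hardware_domain(d);

        /* Any other address: the regular iomem permission check. */
        return iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn));
    }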


> > +        return false;
> > +
> > +    return iomem_access_permitted(d, mfn_x(acl[idx].mfn), mfn_x(acl[idx].mfn));
> > +}
> > +
> > +/* Check if a domain has access to a node.  */
> > +static bool domain_has_node_access(struct domain *d, uint32_t nodeid)
> > +{
> > +    if ( nodeid >= ARRAY_SIZE(pm_node_access) )
> > +        return false;
> > +
> > +    return pm_check_access(pm_node_access, d, nodeid);
> > +}
> > +
> > +/* Check if a domain has access to a reset line.  */
> > +static bool domain_has_reset_access(struct domain *d, uint32_t rst)
> > +{
> > +    if ( rst < XILPM_RESET_PCIE_CFG )
> > +        return false;
> > +
> > +    rst -= XILPM_RESET_PCIE_CFG;
> > +
> > +    if ( rst >= ARRAY_SIZE(pm_reset_access) )
> > +        return false;
> 
> I think you can make this code not rely at all on XILPM_RESET_PCIE_CFG by using
> XILPM_RESET_IDX.
> 
> if ( XILPM_RESET_IDX(rst) >= ARRAY_SIZE(pm_reset_access) )
>    return false;
> 
> rst = XILPM_RESET_IDX(rst);
> 
> We rely on unsigned underflow to catch values below XILPM_RESET_PCIE_CFG,
> which makes the code less error-prone should the array change without this
> code being updated.

Good idea, I'll do that
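
Something like this, I suppose (untested):

    static bool domain_has_reset_access(struct domain *d, uint32_t rst)
    {
        /*
         * Unsigned underflow: values below XILPM_RESET_PCIE_CFG wrap
         * around to large indexes and fail the bound check.
         */
        if ( XILPM_RESET_IDX(rst) >= ARRAY_SIZE(pm_reset_access) )
            return false;

        return pm_check_access(pm_reset_access, d, XILPM_RESET_IDX(rst));
    }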


> > +
> > +    return pm_check_access(pm_reset_access, d, rst);
> > +}
> > +
> > +/*
> > + * Check if a given domain has access to perform an indirect
> > + * MMIO access.
> > + *
> > + * If the provided mask is invalid, it will be fixed up.
> > + */
> > +static bool domain_has_mmio_access(struct domain *d,
> > +                                   bool write, paddr_t addr,
> > +                                   uint32_t *mask)
> > +{
> > +    unsigned int i;
> > +    bool ret = false;
> > +    uint32_t prot_mask = 0;
> > +
> > +    /*
> > +     * The hardware domain gets read access to everything.
> > +     * Lower layers will do further filtering.
> > +     */
> > +    if ( !write && is_hardware_domain(d) )
> > +        return true;
> > +
> > +    /* Scan the ACL.  */
> > +    for ( i = 0; i < ARRAY_SIZE(pm_mmio_access); i++ )
> > +    {
> > +        ASSERT(pm_mmio_access[i].start + pm_mmio_access[i].size >=
> > +               pm_mmio_access[i].start);
> > +
> > +        if ( addr < pm_mmio_access[i].start )
> > +            return false;
> > +        if ( addr >= pm_mmio_access[i].start + pm_mmio_access[i].size )
> > +            continue;
> > +
> > +        if ( write && pm_mmio_access[i].readonly )
> > +            return false;
> > +        if ( pm_mmio_access[i].hwdom_access && !is_hardware_domain(d) )
> > +            return false;
> > +        if ( !domain_has_node_access(d, pm_mmio_access[i].node) )
> > +            return false;
> > +
> > +        /* We've got access to this reg (or parts of it).  */
> > +        ret = true;
> > +
> > +        /* Permit write access to selected bits.  */
> > +        prot_mask |= pm_mmio_access[i].mask ?: GENMASK(31, 0);
> 
> The field mask does not seem to exist.

This code is fully gone now


> > +        break;
> > +    }
> > +
> > +    /*
> > +     * Masking only applies to writes: values are safe to read, but not
> > +     * all bits are writeable.
> > +     */
> > +    if ( write )
> > +        *mask &= prot_mask;
> > +
> > +    return ret;
> > +}
> > +#endif
> > +
> >   bool zynqmp_eemi(struct cpu_user_regs *regs)
> >   {
> >       return false;

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

