Re: [Xen-devel] [PATCH v3 4/6] xen/arm: zynqmp: eemi access control
Hi Stefano,

On 11/08/18 01:01, Stefano Stabellini wrote:

From: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>

Introduce data structs to implement basic access controls.
Introduce the following three functions:

domain_has_node_access: check access to the node
domain_has_reset_access: check access to the reset line
domain_has_mmio_access: check access to the register

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
---
 xen/arch/arm/platforms/xilinx-zynqmp-eemi.c | 783 ++++++++++++++++++++++++++++
 1 file changed, 783 insertions(+)

diff --git a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
index c3a19e9..62cc15c 100644
--- a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
+++ b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
@@ -16,6 +16,74 @@
  * GNU General Public License for more details.
  */
+/*
+ * EEMI Power Management API access
+ *
+ * Refs:
+ * https://www.xilinx.com/support/documentation/user_guides/ug1200-eemi-api.pdf
+ *
+ * Background:
+ * The ZynqMP has a subsystem named the PMU with a CPU and special devices
+ * dedicated to running Power Management Firmware. Other masters in the
+ * system need to send requests to the PMU in order to for example:
+ *  * Manage power state
+ *  * Configure clocks
+ *  * Program bitstreams for the programmable logic
+ *  * etc
+ *
+ * Although the details of the setup are configurable, in the common case
+ * the PMU lives in the Secure world. NS World cannot directly communicate
+ * with it and must use proxy services from ARM Trusted Firmware to reach
+ * the PMU.
+ *
+ * Power Management on the ZynqMP is implemented in a layered manner.
+ * The PMU knows about various masters and will enforce access controls
+ * based on a pre-configured partitioning. This configuration dictates
+ * which devices are owned by the various masters and the PMU FW makes sure
+ * that a given master cannot turn off a device that it does not own or that
+ * is in use by other masters.
+ *
+ * The PMU is not aware of multiple execution states in masters.
+ * For example, it treats the ARMv8 cores as single units and does not
+ * distinguish between Secure vs NS OS's nor does it know about Hypervisors
+ * and multiple guests. It is up to software on the ARMv8 cores to present
+ * a unified view of its power requirements.
+ *
+ * To implement this unified view, ARM Trusted Firmware at EL3 provides
+ * access to the PM API via SMC calls. ARM Trusted Firmware is responsible
+ * for mediating between the Secure and the NS world, rejecting SMC calls
+ * that request changes that are not allowed.
+ *
+ * Xen running above ATF owns the NS world and is responsible for presenting
+ * unified PM requests taking all guests and the hypervisor into account.
+ *
+ * Implementation:
+ * The PM API contains different classes of calls.
+ * Certain calls are harmless to expose to any guest.
+ * These include calls to get the PM API Version, or to read out the version
+ * of the chip we're running on.
+ *
+ * In order to correctly virtualize these calls, we need to know if
+ * guests issuing these calls have ownership of the given device.
+ * The approach taken here is to map PM API Nodes identifying
+ * a device into base addresses for registers that belong to that
+ * same device.
+ *
+ * If the guest has access to devices registers, we give the guest
+ * access to PM API calls that affect that device. This is implemented
+ * by pm_node_access and domain_has_node_access().
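As a side note for anyone following the thread: the intent is that a later
patch wires these ownership checks into the SMC trap path. A rough, untested
sketch of the kind of dispatch zynqmp_eemi() could eventually grow is below;
the EEMI_FID_* constants, XST_PM_NO_ACCESS and forward_to_fw() are
placeholders rather than anything defined by this patch:

/* Illustrative sketch only -- not part of the patch under review. */
static bool zynqmp_eemi_sketch(struct cpu_user_regs *regs)
{
    struct domain *d = current->domain;
    register_t fid = get_user_reg(regs, 0);    /* SMC function ID */
    uint32_t nodeid = get_user_reg(regs, 1);   /* EEMI node argument */

    switch ( fid )
    {
    case EEMI_FID_GET_API_VERSION:
        /* Harmless query: forward to the firmware unchanged. */
        return forward_to_fw(regs);

    case EEMI_FID_REQUEST_NODE:
    case EEMI_FID_RELEASE_NODE:
        /* Device-specific call: only forward if the domain owns the node. */
        if ( !domain_has_node_access(d, nodeid) )
        {
            set_user_reg(regs, 0, XST_PM_NO_ACCESS);
            return true;    /* handled: access denied */
        }
        return forward_to_fw(regs);

    default:
        return false;       /* not an EEMI call handled here */
    }
}

The only point being made is that every device-specific EEMI call funnels
through domain_has_node_access() before anything reaches the firmware.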
+ *
+ * MMIO access:
+ * These calls allow guests to access certain memory ranges. These ranges
+ * are typically protected for secure-world access only and also from
+ * certain masters only, so guests cannot access them directly.
+ * Registers within the memory regions affect certain nodes. In this case,
+ * our input is an address and we map that address into a node. If the
+ * guest has ownership of that node, the access is allowed.
+ * Some registers contain bitfields and a single register may contain
+ * bits that affect multiple nodes.
+ */
+
 #include <xen/iocap.h>
 #include <xen/sched.h>
 #include <xen/types.h>
@@ -23,6 +91,721 @@
 #include <asm/regs.h>
 #include <asm/platforms/xilinx-zynqmp-eemi.h>
+struct pm_access
+{
+    paddr_t addr;

It seems that the address will always be page-aligned. So could we store a
frame using mfn_t?

+    bool hwdom_access;    /* HW domain gets access regardless. */
+};
+
+/*
+ * This table maps a node into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the node to grant it access to EEMI calls for that node.
+ */
+static const struct pm_access pm_node_access[] = {
[...]
+
+/*
+ * This table maps reset line IDs into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the affected node to grant it access to EEMI calls for
+ * resetting that node.
+ */
+#define XILPM_RESET_IDX(n) (n - XILPM_RESET_PCIE_CFG)
+static const struct pm_access pm_reset_access[] = {
[...]
+
+/*
+ * This table maps reset line IDs into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the affected node to grant it access to EEMI calls for
+ * resetting that node.
+ */
+static const struct {
+    paddr_t start;
+    paddr_t size;
+    uint32_t mask;   /* Zero means no mask, i.e all bits. */
+    enum pm_node_id node;
+    bool hwdom_access;
+    bool readonly;
+} pm_mmio_access[] = {

Those 3 arrays contain a lot of hardcoded values. Can't any of this be
detected from the device-tree?

I would be interested to know how this is going to work with upstream
Linux. Do you hardcode all the values there as well?

+static bool pm_check_access(const struct pm_access *acl, struct domain *d,
+                            uint32_t idx)
+{
+    unsigned long mfn;

mfn_t please. The code is not that nice, but at least it adds more safety
to the code. Hopefully iomem_access_permitted will be converted to typesafe
MFN soon.

+
+    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
+        return true;
+
+    if ( !acl[idx].addr )
+        return false;
+
+    mfn = PFN_DOWN(acl[idx].addr);

maddr_to_mfn(...);

+    return iomem_access_permitted(d, mfn, mfn);

Is the address something that a guest would be allowed to read/write
directly? If not, then iomem_access_permitted(...) should not be used.
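To make the mfn_t suggestion concrete: assuming iomem_access_permitted()
does turn out to be the right check here, the typesafe version would look
something like this (untested sketch; only the types change with respect to
the function quoted above):

static bool pm_check_access(const struct pm_access *acl, struct domain *d,
                            uint32_t idx)
{
    mfn_t mfn;

    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
        return true;

    if ( !acl[idx].addr )
        return false;

    /* Addresses in the table appear to be page-aligned; map to an MFN. */
    mfn = maddr_to_mfn(acl[idx].addr);

    /* iomem_access_permitted() is not typesafe yet; unwrap at the call. */
    return iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn));
}

If struct pm_access stored an mfn_t directly, as suggested above, the
conversion would simply move into the table initialisers instead.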
+}
+
+/* Check if a domain has access to a node. */
+static bool domain_has_node_access(struct domain *d, uint32_t nodeid)
+{
+    if ( nodeid > ARRAY_SIZE(pm_node_access) )
+        return false;
+
+    return pm_check_access(pm_node_access, d, nodeid);
+}
+
+/* Check if a domain has access to a reset line. */
+static bool domain_has_reset_access(struct domain *d, uint32_t rst)
+{
+    if ( rst < XILPM_RESET_PCIE_CFG )
+        return false;
+
+    rst -= XILPM_RESET_PCIE_CFG;
+
+    if ( rst > ARRAY_SIZE(pm_reset_access) )
+        return false;
+
+    return pm_check_access(pm_reset_access, d, rst);
+}
+
+/*
+ * Check if a given domain has access to perform an indirect
+ * MMIO access.

This sentence seems to confirm that a domain cannot do direct MMIO accesses
to that region. So iomem_access_permitted is definitely not an option here.

+ *
+ * If the provided mask is invalid, it will be fixed up.
+ */
+static bool domain_has_mmio_access(struct domain *d,
+                                   bool write, paddr_t addr,
+                                   uint32_t *mask)

I don't see this function used below; this would lead to a compilation
error. Can you make sure the series is bisectable?

+{
+    unsigned int i;
+    bool ret = false;
+    uint32_t prot_mask = 0;
+
+    /*
+     * The hardware domain gets read access to everything.
+     * Lower layers will do further filtering.
+     */
+    if ( !write && is_hardware_domain(d) )
+        return true;
+
+    /* Scan the ACL. */
+    for ( i = 0; i < ARRAY_SIZE(pm_mmio_access); i++ )
+    {
+        if ( addr < pm_mmio_access[i].start )
+            return false;
+        if ( addr > pm_mmio_access[i].start + pm_mmio_access[i].size )

I would add an ASSERT(pm_mmio_access[i].start >= pm_mmio_access[i].size) to
catch wrapping.

+            continue;
+
+        if ( write && pm_mmio_access[i].readonly )
+            return false;
+        if ( pm_mmio_access[i].hwdom_access && !is_hardware_domain(d) )
+            return false;
+        if ( !domain_has_node_access(d, pm_mmio_access[i].node) )
+            return false;
+
+        /* We've got access to this reg (or parts of it). */
+        ret = true;
+
+        /* Permit write access to selected bits. */
+        prot_mask |= pm_mmio_access[i].mask ?: 0xFFFFFFFF;

Can you use GENMASK here?

NIT: newline

+        break;
+    }
+
+    /* Masking only applies to writes. */
+    if ( write )
+        *mask &= prot_mask;

So for reads there is no masking? What should the value of *mask be in that
case?

+
+    return ret;
+}
+
 bool zynqmp_eemi(struct cpu_user_regs *regs)
 {
     return false;
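To spell out the GENMASK and newline nits, the tail of the loop above would
become something like the fragment below (untested; GENMASK(31, 0) is just a
more explicit spelling of the all-bits fallback and does not change
behaviour):

        /* We've got access to this reg (or parts of it). */
        ret = true;

        /* Permit write access to selected bits. */
        prot_mask |= pm_mmio_access[i].mask ?: GENMASK(31, 0);

        break;
    }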
Cheers,

--
Julien Grall