[Xen-changelog] [xen master] Merge branch 'arm-next' into staging
=== This changeset includes merge from high-traffic branch === Commits on that branch are not reported individually. commit 65a2c12576a73b67c80a1b4eceff1fa9a4ffa050 Merge: b4ac4bc410222d221dc46a74ac71efaa7b32d57c 302ba0cee8171ee7c12f100f92e122f269d9f0a7 Author: Julien Grall <julien.grall@xxxxxxx> AuthorDate: Wed Jul 4 11:46:11 2018 +0100 Commit: Julien Grall <julien.grall@xxxxxxx> CommitDate: Wed Jul 4 11:46:11 2018 +0100 Merge branch 'arm-next' into staging docs/misc/xen-command-line.markdown | 18 ++ xen/arch/arm/Kconfig | 46 ++--- xen/arch/arm/alternative.c | 86 +++++----- xen/arch/arm/arm64/asm-offsets.c | 2 + xen/arch/arm/arm64/entry.S | 48 +++++- xen/arch/arm/arm64/smpboot.c | 2 +- xen/arch/arm/arm64/vsysreg.c | 4 +- xen/arch/arm/cpuerrata.c | 199 ++++++++++++++++++++++ xen/arch/arm/cpufeature.c | 29 ++++ xen/arch/arm/domain.c | 9 + xen/arch/arm/gic-vgic.c | 2 +- xen/arch/arm/gic.c | 31 ++++ xen/arch/arm/irq.c | 2 +- xen/arch/arm/p2m.c | 53 +++++- xen/arch/arm/platforms/vexpress.c | 35 ---- xen/arch/arm/processor.c | 2 +- xen/arch/arm/psci.c | 13 ++ xen/arch/arm/setup.c | 8 +- xen/arch/arm/smpboot.c | 42 ++++- xen/arch/arm/time.c | 45 +++++ xen/arch/arm/traps.c | 58 +++++-- xen/arch/arm/vgic-v2.c | 2 + xen/arch/arm/vsmc.c | 37 ++++ xen/common/schedule.c | 4 + xen/drivers/video/Kconfig | 3 - xen/drivers/video/Makefile | 1 - xen/drivers/video/arm_hdlcd.c | 281 ------------------------------- xen/include/asm-arm/alternative.h | 44 ++++- xen/include/asm-arm/arm64/macros.h | 25 +++ xen/include/asm-arm/cpuerrata.h | 42 +++++ xen/include/asm-arm/cpufeature.h | 4 +- xen/include/asm-arm/current.h | 6 +- xen/include/asm-arm/macros.h | 2 +- xen/include/asm-arm/platforms/vexpress.h | 6 - xen/include/asm-arm/procinfo.h | 4 +- xen/include/asm-arm/psci.h | 1 + xen/include/asm-arm/smccc.h | 13 +- xen/include/asm-arm/traps.h | 4 + 38 files changed, 785 insertions(+), 428 deletions(-) diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown index 
ff8c7d4c2f..8a832c0f8b 100644 --- a/docs/misc/xen-command-line.markdown +++ b/docs/misc/xen-command-line.markdown @@ -1758,6 +1758,24 @@ enforces the maximum theoretically necessary timeout of 670ms. Any number is being interpreted as a custom timeout in milliseconds. Zero or boolean false disable the quirk workaround, which is also the default. +### spec-ctrl (Arm) +> `= List of [ ssbd=force-disable|runtime|force-enable ]` + +Controls for speculative execution side-channel mitigations. + +The option `ssbd=` is used to control the state of Speculative Store +Bypass Disable (SSBD) mitigation. + +* `ssbd=force-disable` will keep the mitigation permanently off. The guest +will not be able to control the state of the mitigation. +* `ssbd=runtime` will always turn on the mitigation when running in the +hypervisor context. The guest will be able to turn on/off the mitigation for +itself by using the firmware interface ARCH\_WORKAROUND\_2. +* `ssbd=force-enable` will keep the mitigation permanently on. The guest will +not be able to control the state of the mitigation. + +By default SSBD will be mitigated at runtime (i.e. `ssbd=runtime`). + ### spec-ctrl (x86) > `= List of [ <bool>, xen=<bool>, {pv,hvm,msr-sc,rsb}=<bool>, > bti-thunk=retpoline|lfence|jmp, > {ibrs,ibpb,ssbd,eager-fpu}=<bool> ]` diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 8174c0c635..06ba4a4d6e 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -17,12 +17,10 @@ config ARM_64 config ARM def_bool y select HAS_ALTERNATIVE - select HAS_ARM_HDLCD select HAS_DEVICE_TREE select HAS_MEM_ACCESS select HAS_PASSTHROUGH select HAS_PDX - select VIDEO config ARCH_DEFCONFIG string @@ -73,6 +71,33 @@ config SBSA_VUART_CONSOLE Allows a guest to use SBSA Generic UART as a console. The SBSA Generic UART implements a subset of ARM PL011 UART.
+config ARM_SSBD + bool "Speculative Store Bypass Disable" if EXPERT = "y" + depends on HAS_ALTERNATIVE + default y + help + This enables mitigation of bypassing of previous stores by speculative + loads. + + If unsure, say Y. + +config HARDEN_BRANCH_PREDICTOR + bool "Harden the branch predictor against aliasing attacks" if EXPERT = "y" + default y + help + Speculation attacks against some high-performance processors rely on + being able to manipulate the branch predictor for a victim context by + executing aliasing branches in the attacker context. Such attacks + can be partially mitigated against by clearing internal branch + predictor state and limiting the prediction logic in some situations. + + This config option will take CPU-specific actions to harden the + branch predictor against aliasing attacks and may rely on specific + instruction sequences or control bits being set by the system + firmware. + + If unsure, say Y. + endmenu menu "ARM errata workaround via the alternative framework" @@ -187,23 +212,6 @@ config ARM64_ERRATUM_834220 endmenu -config HARDEN_BRANCH_PREDICTOR - bool "Harden the branch predictor against aliasing attacks" if EXPERT - default y - help - Speculation attacks against some high-performance processors rely on - being able to manipulate the branch predictor for a victim context by - executing aliasing branches in the attacker context. Such attacks - can be partially mitigated against by clearing internal branch - predictor state and limiting the prediction logic in some situations. - - This config option will take CPU-specific actions to harden the - branch predictor against aliasing attacks and may rely on specific - instruction sequences or control bits being set by the system - firmware. - - If unsure, say Y. 
- config ARM64_HARDEN_BRANCH_PREDICTOR def_bool y if ARM_64 && HARDEN_BRANCH_PREDICTOR diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c index 9ffdc475d6..52ed7edf69 100644 --- a/xen/arch/arm/alternative.c +++ b/xen/arch/arm/alternative.c @@ -30,6 +30,8 @@ #include <asm/byteorder.h> #include <asm/cpufeature.h> #include <asm/insn.h> +/* XXX: Move ARCH_PATCH_INSN_SIZE out of livepatch.h */ +#include <asm/livepatch.h> #include <asm/page.h> /* Override macros from asm/page.h to make them work with mfn_t */ @@ -94,39 +96,66 @@ static u32 get_alt_insn(const struct alt_instr *alt, return insn; } +static void patch_alternative(const struct alt_instr *alt, + const uint32_t *origptr, + uint32_t *updptr, int nr_inst) +{ + const uint32_t *replptr; + unsigned int i; + + replptr = ALT_REPL_PTR(alt); + for ( i = 0; i < nr_inst; i++ ) + { + uint32_t insn; + + insn = get_alt_insn(alt, origptr + i, replptr + i); + updptr[i] = cpu_to_le32(insn); + } +} + /* * The region patched should be read-write to allow __apply_alternatives * to replace the instructions when necessary. + * + * @update_offset: Offset between the region patched and the writable + * region for the update. 0 if the patched region is writable.
*/ -static int __apply_alternatives(const struct alt_region *region) +static int __apply_alternatives(const struct alt_region *region, + paddr_t update_offset) { const struct alt_instr *alt; - const u32 *replptr; - u32 *origptr; + const u32 *origptr; + u32 *updptr; + alternative_cb_t alt_cb; printk(XENLOG_INFO "alternatives: Patching with alt table %p -> %p\n", region->begin, region->end); for ( alt = region->begin; alt < region->end; alt++ ) { - u32 insn; - int i, nr_inst; + int nr_inst; - if ( !cpus_have_cap(alt->cpufeature) ) + /* Use ARM_CB_PATCH as an unconditional patch */ + if ( alt->cpufeature < ARM_CB_PATCH && + !cpus_have_cap(alt->cpufeature) ) continue; - BUG_ON(alt->alt_len != alt->orig_len); + if ( alt->cpufeature == ARM_CB_PATCH ) + BUG_ON(alt->alt_len != 0); + else + BUG_ON(alt->alt_len != alt->orig_len); origptr = ALT_ORIG_PTR(alt); - replptr = ALT_REPL_PTR(alt); + updptr = (void *)origptr + update_offset; - nr_inst = alt->alt_len / sizeof(insn); + nr_inst = alt->orig_len / ARCH_PATCH_INSN_SIZE; - for ( i = 0; i < nr_inst; i++ ) - { - insn = get_alt_insn(alt, origptr + i, replptr + i); - *(origptr + i) = cpu_to_le32(insn); - } + if ( alt->cpufeature < ARM_CB_PATCH ) + alt_cb = patch_alternative; + else + alt_cb = ALT_REPL_PTR(alt); + + alt_cb(alt, origptr, updptr, nr_inst); /* Ensure the new instructions reached the memory and nuke */ clean_and_invalidate_dcache_va_range(origptr, @@ -162,9 +191,6 @@ static int __apply_alternatives_multi_stop(void *unused) paddr_t xen_size = _end - _start; unsigned int xen_order = get_order_from_bytes(xen_size); void *xenmap; - struct virtual_region patch_region = { - .list = LIST_HEAD_INIT(patch_region.list), - }; BUG_ON(patched); @@ -177,31 +203,13 @@ static int __apply_alternatives_multi_stop(void *unused) /* Re-mapping Xen is not expected to fail during boot. */ BUG_ON(!xenmap); - /* - * If we generate a new branch instruction, the target will be - * calculated in this re-mapped Xen region. 
So we have to register - * this re-mapped Xen region as a virtual region temporarily. - */ - patch_region.start = xenmap; - patch_region.end = xenmap + xen_size; - register_virtual_region(&patch_region); - - /* - * Find the virtual address of the alternative region in the new - * mapping. - * alt_instr contains relative offset, so the function - * __apply_alternatives will patch in the re-mapped version of - * Xen. - */ - region.begin = (void *)__alt_instructions - (void *)_start + xenmap; - region.end = (void *)__alt_instructions_end - (void *)_start + xenmap; + region.begin = __alt_instructions; + region.end = __alt_instructions_end; - ret = __apply_alternatives(®ion); + ret = __apply_alternatives(®ion, xenmap - (void *)_start); /* The patching is not expected to fail during boot. */ BUG_ON(ret != 0); - unregister_virtual_region(&patch_region); - vunmap(xenmap); /* Barriers provided by the cache flushing */ @@ -235,7 +243,7 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en .end = end, }; - return __apply_alternatives(®ion); + return __apply_alternatives(®ion, 0); } /* diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c index ce24e44473..f5c696d092 100644 --- a/xen/arch/arm/arm64/asm-offsets.c +++ b/xen/arch/arm/arm64/asm-offsets.c @@ -22,6 +22,7 @@ void __dummy__(void) { OFFSET(UREGS_X0, struct cpu_user_regs, x0); + OFFSET(UREGS_X1, struct cpu_user_regs, x1); OFFSET(UREGS_LR, struct cpu_user_regs, lr); OFFSET(UREGS_SP, struct cpu_user_regs, sp); @@ -45,6 +46,7 @@ void __dummy__(void) BLANK(); DEFINE(CPUINFO_sizeof, sizeof(struct cpu_info)); + OFFSET(CPUINFO_flags, struct cpu_info, flags); OFFSET(VCPU_arch_saved_context, struct vcpu, arch.saved_context); diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S index ffa9a1c492..97b05f53ea 100644 --- a/xen/arch/arm/arm64/entry.S +++ b/xen/arch/arm/arm64/entry.S @@ -1,4 +1,6 @@ #include <asm/asm_defns.h> +#include <asm/current.h> +#include 
<asm/macros.h> #include <asm/regs.h> #include <asm/alternative.h> #include <asm/smccc.h> @@ -226,11 +228,11 @@ guest_sync: mrs x1, esr_el2 lsr x1, x1, #HSR_EC_SHIFT /* x1 = ESR_EL2.EC */ cmp x1, #HSR_EC_HVC64 - b.ne 1f /* Not a HVC skip fastpath. */ + b.ne guest_sync_slowpath /* Not a HVC skip fastpath. */ mrs x1, esr_el2 and x1, x1, #0xffff /* Check the immediate [0:16] */ - cbnz x1, 1f /* should be 0 for HVC #0 */ + cbnz x1, guest_sync_slowpath /* should be 0 for HVC #0 */ /* * Fastest path possible for ARM_SMCCC_ARCH_WORKAROUND_1. @@ -241,7 +243,7 @@ guest_sync: * be encoded as an immediate for cmp. */ eor w0, w0, #ARM_SMCCC_ARCH_WORKAROUND_1_FID - cbnz w0, 1f + cbnz w0, check_wa2 /* * Clobber both x0 and x1 to prevent leakage. Note that thanks @@ -250,7 +252,45 @@ guest_sync: mov x1, xzr eret -1: +check_wa2: + /* ARM_SMCCC_ARCH_WORKAROUND_2 handling */ + eor w0, w0, #(ARM_SMCCC_ARCH_WORKAROUND_1_FID ^ ARM_SMCCC_ARCH_WORKAROUND_2_FID) + cbnz w0, guest_sync_slowpath +#ifdef CONFIG_ARM_SSBD +alternative_cb arm_enable_wa2_handling + b wa2_end +alternative_cb_end + /* Sanitize the argument */ + mov x0, #-(UREGS_kernel_sizeof - UREGS_X1) /* x0 := offset of guest's x1 on the stack */ + ldr x1, [sp, x0] /* Load guest's x1 */ + cmp w1, wzr + cset x1, ne + + /* + * Update the guest flag. At this stage sp points past the field + * guest_cpu_user_regs in cpu_info. + */ + adr_cpu_info x2 + ldr x0, [x2, #CPUINFO_flags] + bfi x0, x1, #CPUINFO_WORKAROUND_2_FLAG_SHIFT, #1 + str x0, [x2, #CPUINFO_flags] + + /* Check that we actually need to perform the call */ + ldr_this_cpu x0, ssbd_callback_required, x2 + cbz x0, wa2_end + + mov w0, #ARM_SMCCC_ARCH_WORKAROUND_2_FID + smc #0 + +wa2_end: + /* Don't leak data from the SMC call */ + mov x1, xzr + mov x2, xzr + mov x3, xzr +#endif /* !CONFIG_ARM_SSBD */ + mov x0, xzr + eret +guest_sync_slowpath: /* * x0/x1 may have been scratched by the fast path above, so avoid * saving them.
diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c index 4fd0ac68b7..694fbf67e6 100644 --- a/xen/arch/arm/arm64/smpboot.c +++ b/xen/arch/arm/arm64/smpboot.c @@ -104,7 +104,7 @@ int __init arch_cpu_init(int cpu, struct dt_device_node *dn) return smp_psci_init(cpu); } -int __init arch_cpu_up(int cpu) +int arch_cpu_up(int cpu) { if ( !smp_enable_ops[cpu].prepare_cpu ) return -ENODEV; diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c index c57ac12503..6e60824572 100644 --- a/xen/arch/arm/arm64/vsysreg.c +++ b/xen/arch/arm/arm64/vsysreg.c @@ -57,13 +57,15 @@ void do_sysreg(struct cpu_user_regs *regs, * ARMv8 (DDI 0487A.d): D1-1509 Table D1-58 * * Unhandled: - * OSLSR_EL1 * DBGPRCR_EL1 */ case HSR_SYSREG_OSLAR_EL1: return handle_wo_wi(regs, regidx, hsr.sysreg.read, hsr, 1); case HSR_SYSREG_OSDLR_EL1: return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1); + case HSR_SYSREG_OSLSR_EL1: + return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1, + 1 << 3); /* * MDCR_EL2.TDA diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c index 1baa20654b..97a118293b 100644 --- a/xen/arch/arm/cpuerrata.c +++ b/xen/arch/arm/cpuerrata.c @@ -1,3 +1,4 @@ +#include <xen/cpu.h> #include <xen/cpumask.h> #include <xen/mm.h> #include <xen/sizes.h> @@ -5,8 +6,10 @@ #include <xen/spinlock.h> #include <xen/vmap.h> #include <xen/warning.h> +#include <xen/notifier.h> #include <asm/cpufeature.h> #include <asm/cpuerrata.h> +#include <asm/insn.h> #include <asm/psci.h> /* Override macros from asm/page.h to make them work with mfn_t */ @@ -235,6 +238,148 @@ static int enable_ic_inv_hardening(void *data) #endif +#ifdef CONFIG_ARM_SSBD + +enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME; + +static int __init parse_spec_ctrl(const char *s) +{ + const char *ss; + int rc = 0; + + do { + ss = strchr(s, ','); + if ( !ss ) + ss = strchr(s, '\0'); + + if ( !strncmp(s, "ssbd=", 5) ) + { + s += 5; + + if ( !strncmp(s, "force-disable", ss - s) ) + 
ssbd_state = ARM_SSBD_FORCE_DISABLE; + else if ( !strncmp(s, "runtime", ss - s) ) + ssbd_state = ARM_SSBD_RUNTIME; + else if ( !strncmp(s, "force-enable", ss - s) ) + ssbd_state = ARM_SSBD_FORCE_ENABLE; + else + rc = -EINVAL; + } + else + rc = -EINVAL; + + s = ss + 1; + } while ( *ss ); + + return rc; +} +custom_param("spec-ctrl", parse_spec_ctrl); + +/* Arm64 only for now as for Arm32 the workaround is currently handled in C. */ +#ifdef CONFIG_ARM_64 +void __init arm_enable_wa2_handling(const struct alt_instr *alt, + const uint32_t *origptr, + uint32_t *updptr, int nr_inst) +{ + BUG_ON(nr_inst != 1); + + /* + * Only allow mitigation on guest ARCH_WORKAROUND_2 if the SSBD + * state allows it to be flipped. + */ + if ( get_ssbd_state() == ARM_SSBD_RUNTIME ) + *updptr = aarch64_insn_gen_nop(); +} +#endif + +/* + * Assembly code may use the variable directly, so we need to make sure + * it fits in a register. + */ +DEFINE_PER_CPU_READ_MOSTLY(register_t, ssbd_callback_required); + +static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry) +{ + struct arm_smccc_res res; + bool required; + + if ( smccc_ver < SMCCC_VERSION(1, 1) ) + return false; + + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FID, + ARM_SMCCC_ARCH_WORKAROUND_2_FID, &res); + + switch ( (int)res.a0 ) + { + case ARM_SMCCC_NOT_SUPPORTED: + ssbd_state = ARM_SSBD_UNKNOWN; + return false; + + case ARM_SMCCC_NOT_REQUIRED: + ssbd_state = ARM_SSBD_MITIGATED; + return false; + + case ARM_SMCCC_SUCCESS: + required = true; + break; + + case 1: /* Mitigation not required on this CPU.
*/ + required = false; + break; + + default: + ASSERT_UNREACHABLE(); + return false; + } + + switch ( ssbd_state ) + { + case ARM_SSBD_FORCE_DISABLE: + { + static bool once = true; + + if ( once ) + printk("%s disabled from command-line\n", entry->desc); + once = false; + + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL); + required = false; + + break; + } + + case ARM_SSBD_RUNTIME: + if ( required ) + { + this_cpu(ssbd_callback_required) = 1; + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL); + } + + break; + + case ARM_SSBD_FORCE_ENABLE: + { + static bool once = true; + + if ( once ) + printk("%s forced from command-line\n", entry->desc); + once = false; + + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL); + required = true; + + break; + } + + default: + ASSERT_UNREACHABLE(); + return false; + } + + return required; +} +#endif + #define MIDR_RANGE(model, min, max) \ .matches = is_affected_midr_range, \ .midr_model = model, \ @@ -336,6 +481,13 @@ static const struct arm_cpu_capabilities arm_errata[] = { .enable = enable_ic_inv_hardening, }, #endif +#ifdef CONFIG_ARM_SSBD + { + .desc = "Speculative Store Bypass Disabled", + .capability = ARM_SSBD, + .matches = has_ssbd_mitigation, + }, +#endif {}, }; @@ -349,6 +501,53 @@ void __init enable_errata_workarounds(void) enable_cpu_capabilities(arm_errata); } +static int cpu_errata_callback(struct notifier_block *nfb, + unsigned long action, + void *hcpu) +{ + int rc = 0; + + switch ( action ) + { + case CPU_STARTING: + /* + * At CPU_STARTING phase no notifier shall return an error, because the + * system is designed with the assumption that starting a CPU cannot + * fail at this point. If an error happens here it will cause Xen to hit + * the BUG_ON() in notify_cpu_starting(). 
In future, either this + * notifier/enabling capabilities should be fixed to always return + * success/void or notify_cpu_starting() and other common code should be + * fixed to expect an error at CPU_STARTING phase. + */ + ASSERT(system_state != SYS_STATE_boot); + rc = enable_nonboot_cpu_caps(arm_errata); + break; + default: + break; + } + + return !rc ? NOTIFY_DONE : notifier_from_errno(rc); +} + +static struct notifier_block cpu_errata_nfb = { + .notifier_call = cpu_errata_callback, +}; + +static int __init cpu_errata_notifier_init(void) +{ + register_cpu_notifier(&cpu_errata_nfb); + + return 0; +} +/* + * Initialization has to be done at init rather than presmp_init phase because + * the callback should execute only after the secondary CPUs are initially + * booted (in hotplug scenarios when the system state is not boot). On boot, + * the enabling of errata workarounds will be triggered by the boot CPU from + * start_xen(). + */ +__initcall(cpu_errata_notifier_init); + /* * Local variables: * mode: C diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c index 525b45e22f..3aaff4c0e6 100644 --- a/xen/arch/arm/cpufeature.c +++ b/xen/arch/arm/cpufeature.c @@ -69,6 +69,35 @@ void __init enable_cpu_capabilities(const struct arm_cpu_capabilities *caps) } /* + * Run through the enabled capabilities and enable() them on the calling CPU. + * If enabling a capability fails, the error is remembered in 'rc' and the + * remaining capabilities are still enabled, so a single failure does not + * stop the walk. If enabling multiple capabilities fails, the error + * returned by this function is the error code of the last + * failure.
+ */ +int enable_nonboot_cpu_caps(const struct arm_cpu_capabilities *caps) +{ + int rc = 0; + + for ( ; caps->matches; caps++ ) + { + if ( !cpus_have_cap(caps->capability) ) + continue; + + if ( caps->enable ) + { + int ret = caps->enable((void *)caps); + + if ( ret ) + rc = ret; + } + } + + return rc; +} + +/* * Local variables: * mode: C * c-file-style: "BSD" diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index ec0f042bf7..4baecc2447 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -21,6 +21,7 @@ #include <xen/wait.h> #include <asm/alternative.h> +#include <asm/cpuerrata.h> #include <asm/cpufeature.h> #include <asm/current.h> #include <asm/event.h> @@ -550,6 +551,7 @@ int vcpu_initialise(struct vcpu *v) v->arch.cpu_info = (struct cpu_info *)(v->arch.stack + STACK_SIZE - sizeof(struct cpu_info)); + memset(v->arch.cpu_info, 0, sizeof(*v->arch.cpu_info)); memset(&v->arch.saved_context, 0, sizeof(v->arch.saved_context)); v->arch.saved_context.sp = (register_t)v->arch.cpu_info; @@ -571,6 +573,13 @@ int vcpu_initialise(struct vcpu *v) if ( (rc = vcpu_vtimer_init(v)) != 0 ) goto fail; + /* + * The workaround 2 (i.e. SSBD mitigation) is enabled by default if + * supported.
+ */ + if ( get_ssbd_state() == ARM_SSBD_RUNTIME ) + v->arch.cpu_info->flags |= CPUINFO_WORKAROUND_2_FLAG; + return rc; fail: diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c index d831b35525..fd63906e9b 100644 --- a/xen/arch/arm/gic-vgic.c +++ b/xen/arch/arm/gic-vgic.c @@ -362,7 +362,7 @@ int vgic_vcpu_pending_irq(struct vcpu *v) ASSERT(v == current); mask_priority = gic_hw_ops->read_vmcr_priority(); - active_priority = find_next_bit(&apr, 32, 0); + active_priority = find_first_bit(&apr, 32); spin_lock_irqsave(&v->arch.vgic.lock, flags); diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c index 653a815127..5474030386 100644 --- a/xen/arch/arm/gic.c +++ b/xen/arch/arm/gic.c @@ -27,6 +27,8 @@ #include <xen/list.h> #include <xen/device_tree.h> #include <xen/acpi.h> +#include <xen/cpu.h> +#include <xen/notifier.h> #include <asm/p2m.h> #include <asm/domain.h> #include <asm/platform.h> @@ -462,6 +464,35 @@ int gic_iomem_deny_access(const struct domain *d) return gic_hw_ops->iomem_deny_access(d); } +static int cpu_gic_callback(struct notifier_block *nfb, + unsigned long action, + void *hcpu) +{ + switch ( action ) + { + case CPU_DYING: + /* This is reverting the work done in init_maintenance_interrupt */ + release_irq(gic_hw_ops->info->maintenance_irq, NULL); + break; + default: + break; + } + + return NOTIFY_DONE; +} + +static struct notifier_block cpu_gic_nfb = { + .notifier_call = cpu_gic_callback, +}; + +static int __init cpu_gic_notifier_init(void) +{ + register_cpu_notifier(&cpu_gic_nfb); + + return 0; +} +__initcall(cpu_gic_notifier_init); + /* * Local variables: * mode: C diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c index aa4e832cae..098281f8ab 100644 --- a/xen/arch/arm/irq.c +++ b/xen/arch/arm/irq.c @@ -65,7 +65,7 @@ irq_desc_t *__irq_to_desc(int irq) return &irq_desc[irq-NR_LOCAL_IRQS]; } -int __init arch_init_one_irq_desc(struct irq_desc *desc) +int arch_init_one_irq_desc(struct irq_desc *desc) { desc->arch.type = IRQ_TYPE_INVALID; 
return 0; diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index d43c3aa896..14791388ad 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -8,6 +8,8 @@ #include <xen/iocap.h> #include <xen/mem_access.h> #include <xen/xmalloc.h> +#include <xen/cpu.h> +#include <xen/notifier.h> #include <public/vm_event.h> #include <asm/flushtlb.h> #include <asm/event.h> @@ -1451,10 +1453,12 @@ err: return page; } -static void __init setup_virt_paging_one(void *data) +/* VTCR value to be configured by all CPUs. Set only once by the boot CPU */ +static uint32_t __read_mostly vtcr; + +static void setup_virt_paging_one(void *data) { - unsigned long val = (unsigned long)data; - WRITE_SYSREG32(val, VTCR_EL2); + WRITE_SYSREG32(vtcr, VTCR_EL2); isb(); } @@ -1538,10 +1542,49 @@ void __init setup_virt_paging(void) /* It is not allowed to concatenate a level zero root */ BUG_ON( P2M_ROOT_LEVEL == 0 && P2M_ROOT_ORDER > 0 ); - setup_virt_paging_one((void *)val); - smp_call_function(setup_virt_paging_one, (void *)val, 1); + vtcr = val; + setup_virt_paging_one(NULL); + smp_call_function(setup_virt_paging_one, NULL, 1); +} + +static int cpu_virt_paging_callback(struct notifier_block *nfb, + unsigned long action, + void *hcpu) +{ + switch ( action ) + { + case CPU_STARTING: + ASSERT(system_state != SYS_STATE_boot); + setup_virt_paging_one(NULL); + break; + default: + break; + } + + return NOTIFY_DONE; } +static struct notifier_block cpu_virt_paging_nfb = { + .notifier_call = cpu_virt_paging_callback, +}; + +static int __init cpu_virt_paging_init(void) +{ + register_cpu_notifier(&cpu_virt_paging_nfb); + + return 0; +} +/* + * Initialization of the notifier has to be done at init rather than presmp_init + * phase because: the registered notifier is used to setup virtual paging for + * non-boot CPUs after the initial virtual paging for all CPUs is already setup, + * i.e. when a non-boot CPU is hotplugged after the system has booted. 
In other + * words, the notifier should be registered after the virtual paging is + * initially setup (setup_virt_paging() is called from start_xen()). This is + * required because vtcr config value has to be set before a notifier can fire. + */ +__initcall(cpu_virt_paging_init); + /* * Local variables: * mode: C diff --git a/xen/arch/arm/platforms/vexpress.c b/xen/arch/arm/platforms/vexpress.c index 70839d676f..b6193f75b5 100644 --- a/xen/arch/arm/platforms/vexpress.c +++ b/xen/arch/arm/platforms/vexpress.c @@ -59,41 +59,6 @@ static inline int vexpress_ctrl_start(uint32_t *syscfg, int write, return 0; } -int vexpress_syscfg(int write, int function, int device, uint32_t *data) -{ - uint32_t *syscfg = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC); - int ret = -1; - - set_fixmap(FIXMAP_MISC, maddr_to_mfn(V2M_SYS_MMIO_BASE), - PAGE_HYPERVISOR_NOCACHE); - - if ( syscfg[V2M_SYS_CFGCTRL/4] & V2M_SYS_CFG_START ) - goto out; - - /* clear the complete bit in the V2M_SYS_CFGSTAT status register */ - syscfg[V2M_SYS_CFGSTAT/4] = 0; - - if ( write ) - { - /* write data */ - syscfg[V2M_SYS_CFGDATA/4] = *data; - - if ( vexpress_ctrl_start(syscfg, write, function, device) < 0 ) - goto out; - } else { - if ( vexpress_ctrl_start(syscfg, write, function, device) < 0 ) - goto out; - else - /* read data */ - *data = syscfg[V2M_SYS_CFGDATA/4]; - } - - ret = 0; -out: - clear_fixmap(FIXMAP_MISC); - return ret; -} - /* * TODO: Get base address from the device tree * See arm,vexpress-reset node diff --git a/xen/arch/arm/processor.c b/xen/arch/arm/processor.c index ce4385064a..acad8b31d6 100644 --- a/xen/arch/arm/processor.c +++ b/xen/arch/arm/processor.c @@ -20,7 +20,7 @@ static DEFINE_PER_CPU(struct processor *, processor); -void __init processor_setup(void) +void processor_setup(void) { const struct proc_info_list *procinfo; diff --git a/xen/arch/arm/psci.c b/xen/arch/arm/psci.c index 94b616df9b..3cf5ecf0f3 100644 --- a/xen/arch/arm/psci.c +++ b/xen/arch/arm/psci.c @@ -46,6 +46,19 @@ int 
call_psci_cpu_on(int cpu) return call_smc(psci_cpu_on_nr, cpu_logical_map(cpu), __pa(init_secondary), 0); } +void call_psci_cpu_off(void) +{ + if ( psci_ver > PSCI_VERSION(0, 1) ) + { + int errno; + + /* If successful the PSCI cpu_off call doesn't return */ + errno = call_smc(PSCI_0_2_FN32_CPU_OFF, 0, 0, 0); + panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(), + errno); + } +} + void call_psci_system_off(void) { if ( psci_ver > PSCI_VERSION(0, 1) ) diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index 216572fbb2..3a75cb2a34 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -171,8 +171,6 @@ static void __init processor_id(void) } processor_setup(); - - check_local_cpu_errata(); } void dt_unreserved_regions(paddr_t s, paddr_t e, @@ -779,6 +777,12 @@ void __init start_xen(unsigned long boot_phys_offset, printk(XENLOG_INFO "SMP: Allowing %u CPUs\n", cpus); nr_cpu_ids = cpus; + /* + * Some errata rely on the SMCCC version, which is detected by psci_init() + * (called from smp_init_cpus()).
+ */ + check_local_cpu_errata(); + init_xen_time(); gic_init(); diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c index b2116f0d2d..cf3a4ce659 100644 --- a/xen/arch/arm/smpboot.c +++ b/xen/arch/arm/smpboot.c @@ -52,8 +52,8 @@ nodemask_t __read_mostly node_online_map = { { [0] = 1UL } }; static unsigned char __initdata cpu0_boot_stack[STACK_SIZE] __attribute__((__aligned__(STACK_SIZE))); -/* Initial boot cpu data */ -struct init_info __initdata init_data = +/* Boot cpu data */ +struct init_info init_data = { .stack = cpu0_boot_stack, }; @@ -89,6 +89,12 @@ static void setup_cpu_sibling_map(int cpu) cpumask_set_cpu(cpu, per_cpu(cpu_core_mask, cpu)); } +static void remove_cpu_sibling_map(int cpu) +{ + free_cpumask_var(per_cpu(cpu_sibling_mask, cpu)); + free_cpumask_var(per_cpu(cpu_core_mask, cpu)); +} + void __init smp_clear_cpu_maps (void) { @@ -395,6 +401,8 @@ void stop_cpu(void) /* Make sure the write happens before we sleep forever */ dsb(sy); isb(); + call_psci_cpu_off(); + while ( 1 ) wfi(); } @@ -497,6 +505,36 @@ void __cpu_die(unsigned int cpu) smp_mb(); } +static int cpu_smpboot_callback(struct notifier_block *nfb, + unsigned long action, + void *hcpu) +{ + unsigned int cpu = (unsigned long)hcpu; + + switch ( action ) + { + case CPU_DEAD: + remove_cpu_sibling_map(cpu); + break; + default: + break; + } + + return NOTIFY_DONE; +} + +static struct notifier_block cpu_smpboot_nfb = { + .notifier_call = cpu_smpboot_callback, +}; + +static int __init cpu_smpboot_notifier_init(void) +{ + register_cpu_notifier(&cpu_smpboot_nfb); + + return 0; +} +presmp_initcall(cpu_smpboot_notifier_init); + /* * Local variables: * mode: C diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c index c11fcfeadd..1635c8822d 100644 --- a/xen/arch/arm/time.c +++ b/xen/arch/arm/time.c @@ -29,6 +29,8 @@ #include <xen/sched.h> #include <xen/event.h> #include <xen/acpi.h> +#include <xen/cpu.h> +#include <xen/notifier.h> #include <asm/system.h> #include <asm/time.h> #include 
<asm/vgic.h> @@ -312,6 +314,21 @@ void init_timer_interrupt(void) check_timer_irq_cfg(timer_irq[TIMER_PHYS_NONSECURE_PPI], "NS-physical"); } +/* + * Revert actions done in init_timer_interrupt that are required to properly + * disable this CPU. + */ +static void deinit_timer_interrupt(void) +{ + WRITE_SYSREG32(0, CNTP_CTL_EL0); /* Disable physical timer */ + WRITE_SYSREG32(0, CNTHP_CTL_EL2); /* Disable hypervisor's timer */ + isb(); + + release_irq(timer_irq[TIMER_HYP_PPI], NULL); + release_irq(timer_irq[TIMER_VIRT_PPI], NULL); + release_irq(timer_irq[TIMER_PHYS_NONSECURE_PPI], NULL); +} + /* Wait a set number of microseconds */ void udelay(unsigned long usecs) { @@ -340,6 +357,34 @@ void domain_set_time_offset(struct domain *d, int64_t time_offset_seconds) /* XXX update guest visible wallclock time */ } +static int cpu_time_callback(struct notifier_block *nfb, + unsigned long action, + void *hcpu) +{ + switch ( action ) + { + case CPU_DYING: + deinit_timer_interrupt(); + break; + default: + break; + } + + return NOTIFY_DONE; +} + +static struct notifier_block cpu_time_nfb = { + .notifier_call = cpu_time_callback, +}; + +static int __init cpu_time_notifier_init(void) +{ + register_cpu_notifier(&cpu_time_nfb); + + return 0; +} +__initcall(cpu_time_notifier_init); + /* * Local variables: * mode: C diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 5c18e918b0..9ae64ae6fc 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -1739,12 +1739,13 @@ void handle_wo_wi(struct cpu_user_regs *regs, advance_pc(regs, hsr); } -/* Read only as read as zero */ -void handle_ro_raz(struct cpu_user_regs *regs, - int regidx, - bool read, - const union hsr hsr, - int min_el) +/* Read only as value provided with 'val' argument of this function */ +void handle_ro_read_val(struct cpu_user_regs *regs, + int regidx, + bool read, + const union hsr hsr, + int min_el, + register_t val) { ASSERT((min_el == 0) || (min_el == 1)); @@ -1753,13 +1754,22 @@ void 
handle_ro_raz(struct cpu_user_regs *regs, if ( !read ) return inject_undef_exception(regs, hsr); - /* else: raz */ - set_user_reg(regs, regidx, 0); + set_user_reg(regs, regidx, val); advance_pc(regs, hsr); } +/* Read only as read as zero */ +inline void handle_ro_raz(struct cpu_user_regs *regs, + int regidx, + bool read, + const union hsr hsr, + int min_el) +{ + handle_ro_read_val(regs, regidx, read, hsr, min_el, 0); +} + void dump_guest_s1_walk(struct domain *d, vaddr_t addr) { register_t ttbcr = READ_SYSREG(TCR_EL1); @@ -2011,18 +2021,33 @@ inject_abt: inject_iabt_exception(regs, gva, hsr.len); } +static inline bool needs_ssbd_flip(struct vcpu *v) +{ + if ( !check_workaround_ssbd() ) + return false; + + return !(v->arch.cpu_info->flags & CPUINFO_WORKAROUND_2_FLAG) && + cpu_require_ssbd_mitigation(); +} + static void enter_hypervisor_head(struct cpu_user_regs *regs) { if ( guest_mode(regs) ) { + struct vcpu *v = current; + + /* If the guest has disabled the workaround, bring it back on. */ + if ( needs_ssbd_flip(v) ) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL); + /* * If we pended a virtual abort, preserve it until it gets cleared. * See ARM ARM DDI 0487A.j D1.14.3 (Virtual Interrupts) for details, * but the crucial bit is "On taking a vSError interrupt, HCR_EL2.VSE * (alias of HCR.VA) is cleared to 0." */ - if ( current->arch.hcr_el2 & HCR_VA ) - current->arch.hcr_el2 = READ_SYSREG(HCR_EL2); + if ( v->arch.hcr_el2 & HCR_VA ) + v->arch.hcr_el2 = READ_SYSREG(HCR_EL2); #ifdef CONFIG_NEW_VGIC /* @@ -2032,11 +2057,11 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs) * TODO: Investigate whether this is necessary to do on every * trap and how it can be optimised. 
*/ - vtimer_update_irqs(current); - vcpu_update_evtchn_irq(current); + vtimer_update_irqs(v); + vcpu_update_evtchn_irq(v); #endif - vgic_sync_from_lrs(current); + vgic_sync_from_lrs(v); } } @@ -2260,6 +2285,13 @@ void leave_hypervisor_tail(void) */ SYNCHRONIZE_SERROR(SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT); + /* + * The hypervisor runs with the workaround always present. + * If the guest wants it disabled, so be it... + */ + if ( needs_ssbd_flip(current) ) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL); + return; } local_irq_enable(); diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c index 646d1f3d12..f6c11f1e41 100644 --- a/xen/arch/arm/vgic-v2.c +++ b/xen/arch/arm/vgic-v2.c @@ -485,6 +485,8 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info, case VRANGE32(GICD_ISACTIVER, GICD_ISACTIVERN): if ( dabt.size != DABT_WORD ) goto bad_width; + if ( r == 0 ) + goto write_ignore_32; printk(XENLOG_G_ERR "%pv: vGICD: unhandled word write %#"PRIregister" to ISACTIVER%d\n", v, r, gicd_reg - GICD_ISACTIVER); diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c index 40a80d5760..c4ccae6030 100644 --- a/xen/arch/arm/vsmc.c +++ b/xen/arch/arm/vsmc.c @@ -18,6 +18,7 @@ #include <xen/lib.h> #include <xen/types.h> #include <public/arch-arm/smccc.h> +#include <asm/cpuerrata.h> #include <asm/cpufeature.h> #include <asm/monitor.h> #include <asm/regs.h> @@ -104,6 +105,23 @@ static bool handle_arch(struct cpu_user_regs *regs) if ( cpus_have_cap(ARM_HARDEN_BRANCH_PREDICTOR) ) ret = 0; break; + case ARM_SMCCC_ARCH_WORKAROUND_2_FID: + switch ( get_ssbd_state() ) + { + case ARM_SSBD_UNKNOWN: + case ARM_SSBD_FORCE_DISABLE: + break; + + case ARM_SSBD_RUNTIME: + ret = ARM_SMCCC_SUCCESS; + break; + + case ARM_SSBD_FORCE_ENABLE: + case ARM_SSBD_MITIGATED: + ret = ARM_SMCCC_NOT_REQUIRED; + break; + } + break; } set_user_reg(regs, 0, ret); @@ -114,6 +132,25 @@ static bool handle_arch(struct cpu_user_regs *regs) case ARM_SMCCC_ARCH_WORKAROUND_1_FID: /* No 
return value */ return true; + + case ARM_SMCCC_ARCH_WORKAROUND_2_FID: + { + bool enable = (uint32_t)get_user_reg(regs, 1); + + /* + * ARM_WORKAROUND_2_FID should only be called when mitigation + * state can be changed at runtime. + */ + if ( unlikely(get_ssbd_state() != ARM_SSBD_RUNTIME) ) + return true; + + if ( enable ) + get_cpu_info()->flags |= CPUINFO_WORKAROUND_2_FLAG; + else + get_cpu_info()->flags &= ~CPUINFO_WORKAROUND_2_FLAG; + + return true; + } } return false; diff --git a/xen/common/schedule.c b/xen/common/schedule.c index 9718ce37fb..05281d6af7 100644 --- a/xen/common/schedule.c +++ b/xen/common/schedule.c @@ -737,6 +737,7 @@ void restore_vcpu_affinity(struct domain *d) for_each_vcpu ( d, v ) { spinlock_t *lock; + unsigned int old_cpu = v->processor; ASSERT(!vcpu_runnable(v)); @@ -769,6 +770,9 @@ void restore_vcpu_affinity(struct domain *d) lock = vcpu_schedule_lock_irq(v); v->processor = SCHED_OP(vcpu_scheduler(v), pick_cpu, v); spin_unlock_irq(lock); + + if ( old_cpu != v->processor ) + sched_move_irqs(v); } domain_update_node_affinity(d); diff --git a/xen/drivers/video/Kconfig b/xen/drivers/video/Kconfig index 52e8ce6c15..41ca503cc9 100644 --- a/xen/drivers/video/Kconfig +++ b/xen/drivers/video/Kconfig @@ -11,6 +11,3 @@ config VGA Enable VGA output for the Xen hypervisor. If unsure, say Y. 
- -config HAS_ARM_HDLCD - bool diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile index 2bb91d62a5..2b3fc76812 100644 --- a/xen/drivers/video/Makefile +++ b/xen/drivers/video/Makefile @@ -4,4 +4,3 @@ obj-$(CONFIG_VIDEO) += font_8x16.o obj-$(CONFIG_VIDEO) += font_8x8.o obj-$(CONFIG_VIDEO) += lfb.o obj-$(CONFIG_VGA) += vesa.o -obj-$(CONFIG_HAS_ARM_HDLCD) += arm_hdlcd.o diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c deleted file mode 100644 index e1174b223f..0000000000 --- a/xen/drivers/video/arm_hdlcd.c +++ /dev/null @@ -1,281 +0,0 @@ -/* - * xen/drivers/video/arm_hdlcd.c - * - * Driver for ARM HDLCD Controller - * - * Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> - * Copyright (c) 2013 Citrix Systems. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- */ - -#include <asm/delay.h> -#include <asm/types.h> -#include <asm/platforms/vexpress.h> -#include <xen/device_tree.h> -#include <xen/libfdt/libfdt.h> -#include <xen/init.h> -#include <xen/mm.h> -#include "font.h" -#include "lfb.h" -#include "modelines.h" - -#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC)) - -#define HDLCD_INTMASK (0x18/4) -#define HDLCD_FBBASE (0x100/4) -#define HDLCD_LINELENGTH (0x104/4) -#define HDLCD_LINECOUNT (0x108/4) -#define HDLCD_LINEPITCH (0x10C/4) -#define HDLCD_BUS (0x110/4) -#define HDLCD_VSYNC (0x200/4) -#define HDLCD_VBACK (0x204/4) -#define HDLCD_VDATA (0x208/4) -#define HDLCD_VFRONT (0x20C/4) -#define HDLCD_HSYNC (0x210/4) -#define HDLCD_HBACK (0x214/4) -#define HDLCD_HDATA (0x218/4) -#define HDLCD_HFRONT (0x21C/4) -#define HDLCD_POLARITIES (0x220/4) -#define HDLCD_COMMAND (0x230/4) -#define HDLCD_PF (0x240/4) -#define HDLCD_RED (0x244/4) -#define HDLCD_GREEN (0x248/4) -#define HDLCD_BLUE (0x24C/4) - -struct color_masks { - int red_shift; - int red_size; - int green_shift; - int green_size; - int blue_shift; - int blue_size; -}; - -struct pixel_colors { - const char* bpp; - struct color_masks colors; -}; - -struct pixel_colors __initdata colors[] = { - { "16", { 0, 5, 11, 5, 6, 5 } }, - { "24", { 0, 8, 16, 8, 8, 8 } }, - { "32", { 0, 8, 16, 8, 8, 8 } }, -}; - -static void vga_noop_puts(const char *s) {} -void (*video_puts)(const char *) = vga_noop_puts; - -static void hdlcd_flush(void) -{ - dsb(sy); -} - -static int __init get_color_masks(const char* bpp, struct color_masks **masks) -{ - int i; - for ( i = 0; i < ARRAY_SIZE(colors); i++ ) - { - if ( !strncmp(colors[i].bpp, bpp, 2) ) - { - *masks = &colors[i].colors; - return 0; - } - } - return -1; -} - -static void __init set_pixclock(uint32_t pixclock) -{ - if ( dt_find_compatible_node(NULL, NULL, "arm,vexpress") ) - vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, - V2M_SYS_CFG_OSC5, &pixclock); -} - -void __init video_init(void) -{ - struct lfb_prop lfbp; - unsigned 
char *lfb; - paddr_t hdlcd_start, hdlcd_size; - paddr_t framebuffer_start, framebuffer_size; - const char *mode_string; - char _mode_string[16]; - int bytes_per_pixel = 4; - struct color_masks *c = NULL; - struct modeline *videomode = NULL; - int i; - const struct dt_device_node *dev; - const __be32 *cells; - u32 lenp; - int res; - - dev = dt_find_compatible_node(NULL, NULL, "arm,hdlcd"); - - if ( !dev ) - { - printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n"); - return; - } - - res = dt_device_get_address(dev, 0, &hdlcd_start, &hdlcd_size); - if ( !res ) - { - printk("HDLCD: Unable to retrieve MMIO base address\n"); - return; - } - - cells = dt_get_property(dev, "framebuffer", &lenp); - if ( !cells ) - { - printk("HDLCD: Unable to retrieve framebuffer property\n"); - return; - } - - framebuffer_start = dt_next_cell(dt_n_addr_cells(dev), &cells); - framebuffer_size = dt_next_cell(dt_n_size_cells(dev), &cells); - - if ( !hdlcd_start ) - { - printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n"); - return; - } - - if ( !framebuffer_start ) - { - printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n"); - return; - } - - res = dt_property_read_string(dev, "mode", &mode_string); - if ( res ) - { - get_color_masks("32", &c); - memcpy(_mode_string, "1280x1024@60", strlen("1280x1024@60") + 1); - bytes_per_pixel = 4; - } - else if ( strlen(mode_string) < strlen("800x600@60") || - strlen(mode_string) > sizeof(_mode_string) - 1 ) - { - printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string); - return; - } else { - char *s = strchr(mode_string, '-'); - if ( !s ) - { - printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n", - mode_string); - get_color_masks("32", &c); - memcpy(_mode_string, mode_string, strlen(mode_string) + 1); - bytes_per_pixel = 4; - } else { - if ( strlen(s) < 6 ) - { - printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string); - return; - } - s++; - if ( 
get_color_masks(s, &c) < 0 ) - { - printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s); - return; - } - bytes_per_pixel = simple_strtoll(s, NULL, 10) / 8; - } - i = s - mode_string - 1; - memcpy(_mode_string, mode_string, i); - memcpy(_mode_string + i, mode_string + i + 3, 4); - } - - for ( i = 0; i < ARRAY_SIZE(videomodes); i++ ) { - if ( !strcmp(_mode_string, videomodes[i].mode) ) - { - videomode = &videomodes[i]; - break; - } - } - if ( !videomode ) - { - printk(KERN_WARNING "HDLCD: unsupported videomode %s\n", - _mode_string); - return; - } - - if ( framebuffer_size < bytes_per_pixel * videomode->xres * videomode->yres ) - { - printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n"); - return; - } - - printk(KERN_INFO "Initializing HDLCD driver\n"); - - lfb = ioremap_wc(framebuffer_start, framebuffer_size); - if ( !lfb ) - { - printk(KERN_ERR "Couldn't map the framebuffer\n"); - return; - } - memset(lfb, 0x00, bytes_per_pixel * videomode->xres * videomode->yres); - - /* uses FIXMAP_MISC */ - set_pixclock(videomode->pixclock); - - set_fixmap(FIXMAP_MISC, maddr_to_mfn(hdlcd_start), PAGE_HYPERVISOR_NOCACHE); - HDLCD[HDLCD_COMMAND] = 0; - - HDLCD[HDLCD_LINELENGTH] = videomode->xres * bytes_per_pixel; - HDLCD[HDLCD_LINECOUNT] = videomode->yres - 1; - HDLCD[HDLCD_LINEPITCH] = videomode->xres * bytes_per_pixel; - HDLCD[HDLCD_PF] = ((bytes_per_pixel - 1) << 3); - HDLCD[HDLCD_INTMASK] = 0; - HDLCD[HDLCD_FBBASE] = framebuffer_start; - HDLCD[HDLCD_BUS] = 0xf00 | (1 << 4); - HDLCD[HDLCD_VBACK] = videomode->vback - 1; - HDLCD[HDLCD_VSYNC] = videomode->vsync - 1; - HDLCD[HDLCD_VDATA] = videomode->yres - 1; - HDLCD[HDLCD_VFRONT] = videomode->vfront - 1; - HDLCD[HDLCD_HBACK] = videomode->hback - 1; - HDLCD[HDLCD_HSYNC] = videomode->hsync - 1; - HDLCD[HDLCD_HDATA] = videomode->xres - 1; - HDLCD[HDLCD_HFRONT] = videomode->hfront - 1; - HDLCD[HDLCD_POLARITIES] = (1 << 2) | (1 << 3); - HDLCD[HDLCD_RED] = (c->red_size << 8) | c->red_shift; - 
HDLCD[HDLCD_GREEN] = (c->green_size << 8) | c->green_shift; - HDLCD[HDLCD_BLUE] = (c->blue_size << 8) | c->blue_shift; - HDLCD[HDLCD_COMMAND] = 1; - clear_fixmap(FIXMAP_MISC); - - lfbp.pixel_on = (((1 << c->red_size) - 1) << c->red_shift) | - (((1 << c->green_size) - 1) << c->green_shift) | - (((1 << c->blue_size) - 1) << c->blue_shift); - lfbp.lfb = lfb; - lfbp.font = &font_vga_8x16; - lfbp.bits_per_pixel = bytes_per_pixel*8; - lfbp.bytes_per_line = bytes_per_pixel*videomode->xres; - lfbp.width = videomode->xres; - lfbp.height = videomode->yres; - lfbp.flush = hdlcd_flush; - lfbp.text_columns = videomode->xres / 8; - lfbp.text_rows = videomode->yres / 16; - if ( lfb_init(&lfbp) < 0 ) - return; - video_puts = lfb_scroll_puts; -} - -void __init video_endboot(void) { } - -/* - * Local variables: - * mode: C - * c-file-style: "BSD" - * c-basic-offset: 4 - * indent-tabs-mode: nil - * End: - */ diff --git a/xen/include/asm-arm/alternative.h b/xen/include/asm-arm/alternative.h index 4e33d1cdf7..9b4b02811b 100644 --- a/xen/include/asm-arm/alternative.h +++ b/xen/include/asm-arm/alternative.h @@ -3,6 +3,8 @@ #include <asm/cpufeature.h> +#define ARM_CB_PATCH ARM_NCAPS + #ifndef __ASSEMBLY__ #include <xen/init.h> @@ -18,16 +20,24 @@ struct alt_instr { }; /* Xen: helpers used by common code. 
*/ -#define __ALT_PTR(a,f) ((u32 *)((void *)&(a)->f + (a)->f)) +#define __ALT_PTR(a,f) ((void *)&(a)->f + (a)->f) #define ALT_ORIG_PTR(a) __ALT_PTR(a, orig_offset) #define ALT_REPL_PTR(a) __ALT_PTR(a, alt_offset) +typedef void (*alternative_cb_t)(const struct alt_instr *alt, + const uint32_t *origptr, uint32_t *updptr, + int nr_inst); + void __init apply_alternatives_all(void); int apply_alternatives(const struct alt_instr *start, const struct alt_instr *end); -#define ALTINSTR_ENTRY(feature) \ +#define ALTINSTR_ENTRY(feature, cb) \ " .word 661b - .\n" /* label */ \ + " .if " __stringify(cb) " == 0\n" \ " .word 663f - .\n" /* new instruction */ \ + " .else\n" \ + " .word " __stringify(cb) "- .\n" /* callback */ \ + " .endif\n" \ " .hword " __stringify(feature) "\n" /* feature bit */ \ " .byte 662b-661b\n" /* source len */ \ " .byte 664f-663f\n" /* replacement len */ @@ -45,15 +55,18 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en * but most assemblers die if insn1 or insn2 have a .inst. This should * be fixed in a binutils release posterior to 2.25.51.0.2 (anything * containing commit 4e4d08cf7399b606 or c1baaddf8861). + * + * Alternatives with callbacks do not generate replacement instructions. */ -#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled) \ +#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled, cb) \ ".if "__stringify(cfg_enabled)" == 1\n" \ "661:\n\t" \ oldinstr "\n" \ "662:\n" \ ".pushsection .altinstructions,\"a\"\n" \ - ALTINSTR_ENTRY(feature) \ + ALTINSTR_ENTRY(feature,cb) \ ".popsection\n" \ + " .if " __stringify(cb) " == 0\n" \ ".pushsection .altinstr_replacement, \"a\"\n" \ "663:\n\t" \ newinstr "\n" \ @@ -61,11 +74,17 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en ".popsection\n\t" \ ".org . - (664b-663b) + (662b-661b)\n\t" \ ".org . 
- (662b-661b) + (664b-663b)\n" \ + ".else\n\t" \ + "663:\n\t" \ + "664:\n\t" \ + ".endif\n" \ ".endif\n" #define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...) \ - __ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg)) + __ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg), 0) +#define ALTERNATIVE_CB(oldinstr, cb) \ + __ALTERNATIVE_CFG(oldinstr, "NOT_AN_INSTRUCTION", ARM_CB_PATCH, 1, cb) #else #include <asm/asm_defns.h> @@ -126,6 +145,14 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en 663: .endm +.macro alternative_cb cb + .set .Lasm_alt_mode, 0 + .pushsection .altinstructions, "a" + altinstruction_entry 661f, \cb, ARM_CB_PATCH, 662f-661f, 0 + .popsection +661: +.endm + /* * Complete an alternative code sequence. */ @@ -135,6 +162,13 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en .org . - (662b-661b) + (664b-663b) .endm +/* + * Callback-based alternative epilogue + */ +.macro alternative_cb_end +662: +.endm + #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...) 
\ alternative_insn insn1, insn2, cap, IS_ENABLED(cfg) diff --git a/xen/include/asm-arm/arm64/macros.h b/xen/include/asm-arm/arm64/macros.h new file mode 100644 index 0000000000..9c5e676b37 --- /dev/null +++ b/xen/include/asm-arm/arm64/macros.h @@ -0,0 +1,25 @@ +#ifndef __ASM_ARM_ARM64_MACROS_H +#define __ASM_ARM_ARM64_MACROS_H + + /* + * @dst: Result of get_cpu_info() + */ + .macro adr_cpu_info, dst + add \dst, sp, #STACK_SIZE + and \dst, \dst, #~(STACK_SIZE - 1) + sub \dst, \dst, #CPUINFO_sizeof + .endm + + /* + * @dst: Result of READ_ONCE(per_cpu(sym, smp_processor_id())) + * @sym: The name of the per-cpu variable + * @tmp: scratch register + */ + .macro ldr_this_cpu, dst, sym, tmp + ldr \dst, =per_cpu__\sym + mrs \tmp, tpidr_el2 + ldr \dst, [\dst, \tmp] + .endm + +#endif /* __ASM_ARM_ARM64_MACROS_H */ + diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h index 4e45b237c8..55ddfda272 100644 --- a/xen/include/asm-arm/cpuerrata.h +++ b/xen/include/asm-arm/cpuerrata.h @@ -27,9 +27,51 @@ static inline bool check_workaround_##erratum(void) \ CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32) CHECK_WORKAROUND_HELPER(834220, ARM64_WORKAROUND_834220, CONFIG_ARM_64) +CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD) #undef CHECK_WORKAROUND_HELPER +enum ssbd_state +{ + ARM_SSBD_UNKNOWN, + ARM_SSBD_FORCE_DISABLE, + ARM_SSBD_RUNTIME, + ARM_SSBD_FORCE_ENABLE, + ARM_SSBD_MITIGATED, +}; + +#ifdef CONFIG_ARM_SSBD + +#include <asm/current.h> + +extern enum ssbd_state ssbd_state; + +static inline enum ssbd_state get_ssbd_state(void) +{ + return ssbd_state; +} + +DECLARE_PER_CPU(register_t, ssbd_callback_required); + +static inline bool cpu_require_ssbd_mitigation(void) +{ + return this_cpu(ssbd_callback_required); +} + +#else + +static inline bool cpu_require_ssbd_mitigation(void) +{ + return false; +} + +static inline enum ssbd_state get_ssbd_state(void) +{ + return ARM_SSBD_UNKNOWN; +} + +#endif + #endif /* 
__ARM_CPUERRATA_H__ */ /* * Local variables: diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h index e557a095af..3de6b54301 100644 --- a/xen/include/asm-arm/cpufeature.h +++ b/xen/include/asm-arm/cpufeature.h @@ -43,8 +43,9 @@ #define SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT 5 #define SKIP_CTXT_SWITCH_SERROR_SYNC 6 #define ARM_HARDEN_BRANCH_PREDICTOR 7 +#define ARM_SSBD 8 -#define ARM_NCAPS 8 +#define ARM_NCAPS 9 #ifndef __ASSEMBLY__ @@ -88,6 +89,7 @@ void update_cpu_capabilities(const struct arm_cpu_capabilities *caps, const char *info); void enable_cpu_capabilities(const struct arm_cpu_capabilities *caps); +int enable_nonboot_cpu_caps(const struct arm_cpu_capabilities *caps); #endif /* __ASSEMBLY__ */ diff --git a/xen/include/asm-arm/current.h b/xen/include/asm-arm/current.h index 7a0971fdea..f9819b34fc 100644 --- a/xen/include/asm-arm/current.h +++ b/xen/include/asm-arm/current.h @@ -7,6 +7,10 @@ #include <asm/percpu.h> #include <asm/processor.h> +/* Tell whether the guest vCPU enabled Workaround 2 (i.e variant 4) */ +#define CPUINFO_WORKAROUND_2_FLAG_SHIFT 0 +#define CPUINFO_WORKAROUND_2_FLAG (_AC(1, U) << CPUINFO_WORKAROUND_2_FLAG_SHIFT) + #ifndef __ASSEMBLY__ struct vcpu; @@ -21,7 +25,7 @@ DECLARE_PER_CPU(struct vcpu *, curr_vcpu); struct cpu_info { struct cpu_user_regs guest_cpu_user_regs; unsigned long elr; - unsigned int pad; + uint32_t flags; }; static inline struct cpu_info *get_cpu_info(void) diff --git a/xen/include/asm-arm/macros.h b/xen/include/asm-arm/macros.h index 5d837cb38b..1d4bb41d15 100644 --- a/xen/include/asm-arm/macros.h +++ b/xen/include/asm-arm/macros.h @@ -8,7 +8,7 @@ #if defined (CONFIG_ARM_32) # include <asm/arm32/macros.h> #elif defined(CONFIG_ARM_64) -/* No specific ARM64 macros for now */ +# include <asm/arm64/macros.h> #else # error "unknown ARM variant" #endif diff --git a/xen/include/asm-arm/platforms/vexpress.h b/xen/include/asm-arm/platforms/vexpress.h index 5cf3aba6f2..8b45d3a850 100644 --- 
a/xen/include/asm-arm/platforms/vexpress.h +++ b/xen/include/asm-arm/platforms/vexpress.h @@ -26,12 +26,6 @@ /* Board-specific: base address of system controller */ #define SP810_ADDRESS 0x1C020000 -#ifndef __ASSEMBLY__ -#include <xen/inttypes.h> - -int vexpress_syscfg(int write, int function, int device, uint32_t *data); -#endif - #endif /* __ASM_ARM_PLATFORMS_VEXPRESS_H */ /* * Local variables: diff --git a/xen/include/asm-arm/procinfo.h b/xen/include/asm-arm/procinfo.h index 26306b35f8..02be56e348 100644 --- a/xen/include/asm-arm/procinfo.h +++ b/xen/include/asm-arm/procinfo.h @@ -35,9 +35,9 @@ struct proc_info_list { struct processor *processor; }; -const __init struct proc_info_list *lookup_processor_type(void); +const struct proc_info_list *lookup_processor_type(void); -void __init processor_setup(void); +void processor_setup(void); void processor_vcpu_initialise(struct vcpu *v); #endif diff --git a/xen/include/asm-arm/psci.h b/xen/include/asm-arm/psci.h index 9ac820e94a..832f77afff 100644 --- a/xen/include/asm-arm/psci.h +++ b/xen/include/asm-arm/psci.h @@ -20,6 +20,7 @@ extern uint32_t psci_ver; int psci_init(void); int call_psci_cpu_on(int cpu); +void call_psci_cpu_off(void); void call_psci_system_off(void); void call_psci_system_reset(void); diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h index 8342cc33fe..74c13f8419 100644 --- a/xen/include/asm-arm/smccc.h +++ b/xen/include/asm-arm/smccc.h @@ -254,11 +254,18 @@ struct arm_smccc_res { #define ARM_SMCCC_ARCH_WORKAROUND_1_FID \ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ - ARM_SMCCC_CONV_32, \ - ARM_SMCCC_OWNER_ARCH, \ - 0x8000) + ARM_SMCCC_CONV_32, \ + ARM_SMCCC_OWNER_ARCH, \ + 0x8000) + +#define ARM_SMCCC_ARCH_WORKAROUND_2_FID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_CONV_32, \ + ARM_SMCCC_OWNER_ARCH, \ + 0x7FFF) /* SMCCC error codes */ +#define ARM_SMCCC_NOT_REQUIRED (-2) #define ARM_SMCCC_ERR_UNKNOWN_FUNCTION (-1) #define ARM_SMCCC_NOT_SUPPORTED (-1) #define 
ARM_SMCCC_SUCCESS (0) diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h index a0e5e92ebb..70b52d1d16 100644 --- a/xen/include/asm-arm/traps.h +++ b/xen/include/asm-arm/traps.h @@ -27,6 +27,10 @@ void handle_wo_wi(struct cpu_user_regs *regs, int regidx, bool read, void handle_ro_raz(struct cpu_user_regs *regs, int regidx, bool read, const union hsr hsr, int min_el); +/* Read only as value provided with 'val' argument */ +void handle_ro_read_val(struct cpu_user_regs *regs, int regidx, bool read, + const union hsr hsr, int min_el, register_t val); + /* Co-processor registers emulation (see arch/arm/vcpreg.c). */ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr); void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr); -- generated by git-patchbot for /home/xen/git/xen.git#master _______________________________________________ Xen-changelog mailing list Xen-changelog@xxxxxxxxxxxxxxxxxxxx https://lists.xenproject.org/xen-changelog