Re: [Xen-devel] [PATCH] xen/vcpu: Sanitise VCPUOP_initialise call hierarchy
Hi,

On 02/12/2019 16:17, Andrew Cooper wrote:
> On 15/11/2019 15:24, Julien Grall wrote:
>> On Fri, 15 Nov 2019, 18:13 Andrew Cooper, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 31/10/2019 21:25, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 31/10/2019 19:28, Andrew Cooper wrote:
>>>>> This code is especially tangled. VCPUOP_initialise calls into
>>>>> arch_initialise_vcpu(), which calls back into
>>>>> default_initialise_vcpu(), which is common code.
>>>>>
>>>>> This path is actually dead code on ARM, because VCPUOP_initialise
>>>>> is filtered out by do_arm_vcpu_op().
>>>>>
>>>>> The only valid way to start a secondary CPU on ARM is via the
>>>>> PSCI interface. The same could in principle be said about
>>>>> INIT-SIPI-SIPI for x86 HVM, if HVM guests hadn't already
>>>>> inherited a paravirt way of starting CPUs.
>>>>>
>>>>> Either way, it is quite likely that no future architecture
>>>>> implemented in Xen is going to want to use a PV interface, as
>>>>> some standardised (v)CPU bringup mechanism will already exist.
>>>>
>>>> I am not sure I agree here. Looking at the Linux RISC-V code (see
>>>> [1] and [2]), it looks like the kernel has to select one "lucky"
>>>> CPU/hart to deal with the boot and park all the others.
>>>>
>>>> So it looks to me like there is nothing at the moment on RISC-V to
>>>> do (v)CPU bring-up. We might be able to use PSCI (although this is
>>>> an ARM-specific way), but I would rather wait and see what the
>>>> RISC-V folks come up with before deciding PV is never going to be
>>>> used.
>>>
>>> Nothing here prohibits other architectures from using a PV
>>> interface if they wish.
>>
>> Well, your commit message and the code movement imply that nobody
>> will ever use it.
>>
>>> However, your examples prove my point. There is an already-agreed
>>> way to start RISC-V CPUs which is not a PV interface, and which is
>>> therefore very unlikely to be adapted to run differently under Xen.
>>
>> I would not call that a way to start CPUs, because AFAICT all CPUs
>> have to be brought up together and you can't offline them. This is
>> fairly restrictive for a guest, so I don't think reusing it would be
>> sustainable long term.
>>
>> FWIW, this is exactly what Arm used to have before PSCI.
>
> This reply is not helpful with progressing the patch.
>
> I'm not arguing about whether the current RISC-V behaviour is great
> or not. It is what it is.
>
> The question at hand is: in some theoretical future where Xen gains
> RISC-V support, how likely are the Linux RISC-V maintainers to take a
> Xen-specific paravirt startup sequence which does things differently
> from the existing, hypervisor-agnostic one?
>
> The answer is tantamount to 0, because what does it actually gain
> you? An extra boot protocol to support, which is hypervisor-specific,
> with no added functionality over the existing hypervisor-neutral one.

RISC-V will probably have to come up with a new protocol that allows a
CPU to be offlined/onlined. If they don't agree on one, then every
hypervisor/platform will have to invent its own. As I don't have any
insight into RISC-V, I can't really predict whether they will repeat
the Arm 32-bit story.

> I still don't see any convincing argument to suggest that future
> architectures may choose to use a Xen-specific paravirt start
> mechanism, but, as already stated, this patch doesn't rule such an
> interface out.

Leaving aside the argument regarding whether a newer architecture
would use them, it feels slightly odd to suggest the protocol will not
be used by other platforms, but then only move out VCPUOP_initialise.
VCPUOP_{up,down} are still present.
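To illustrate how dead this whole class of sub-ops already is on Arm:
do_arm_vcpu_op() only lets two sub-ops through to the common code and
rejects everything else. From memory (so the exact list may differ in
your tree), it is roughly:

    long do_arm_vcpu_op(int cmd, unsigned int vcpuid,
                        XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        switch ( cmd )
        {
        /* Only these sub-ops reach the common do_vcpu_op(). */
        case VCPUOP_register_vcpu_info:
        case VCPUOP_register_runstate_memory_area:
            return do_vcpu_op(cmd, vcpuid, arg);

        /* Everything else, including initialise/up/down, is rejected. */
        default:
            return -EINVAL;
        }
    }

So on Arm, VCPUOP_initialise, VCPUOP_up and VCPUOP_down are all
equally unreachable today.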
If we really consider that a new arch will come up with its own
protocol, then we should remove all of these hypercalls, so that we
don't end up with half-supported state. In that case, I would just
prefer that we introduce a Kconfig option covering at least
VCPUOP_{up,down,initialise}; a rough sketch of what I mean is below my
signature.

Cheers,

--
Julien Grall
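The sketch (untested, and CONFIG_VCPUOP_PV_BRINGUP is a name I have
just made up; the elided cases stand for the existing handling):

    /*
     * xen/common/Kconfig would gain something like:
     *
     *   config VCPUOP_PV_BRINGUP
     *           bool "Paravirt vCPU bringup sub-ops"
     *           default X86
     *
     * and the common dispatcher would compile the sub-ops out when
     * the option is disabled:
     */
    long do_vcpu_op(int cmd, unsigned int vcpuid,
                    XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        long rc = 0;
        struct domain *d = current->domain;
        struct vcpu *v;

        if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
            return -ENOENT;

        switch ( cmd )
        {
    #ifdef CONFIG_VCPUOP_PV_BRINGUP
        case VCPUOP_initialise:
            rc = arch_initialise_vcpu(v, arg);  /* existing path */
            break;

        case VCPUOP_up:
        case VCPUOP_down:
            /* ... existing up/down handling, unchanged ... */
            break;
    #endif

        /* ... all remaining sub-ops, unchanged ... */
        }

        return rc;
    }

That way an arch which brings its own protocol simply doesn't select
the option, rather than having a half-present PV interface.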