Re: [PATCH v6 6/9] xen/riscv: introduce functionality to work with CPU info
On 02.09.2024 19:01, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/processor.h
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -12,8 +12,31 @@
>
>  #ifndef __ASSEMBLY__
>
> -/* TODO: need to be implemeted */
> -#define smp_processor_id() 0
> +#include <xen/bug.h>
> +
> +register struct pcpu_info *tp asm ( "tp" );
> +
> +struct pcpu_info {
> +    unsigned int processor_id; /* Xen CPU id */
> +    unsigned long hart_id;     /* physical CPU id */
> +} __cacheline_aligned;

Shouldn't you include xen/cache.h for this, to be sure the header can
be included on its own?

I'm also unconvinced of this placement: Both Arm and x86 have similar
structures (afaict), living in current.h.

> +/* tp points to one of these */
> +extern struct pcpu_info pcpu_info[NR_CPUS];
> +
> +#define get_processor_id() (tp->processor_id)

Iirc it was in response to one of your earlier patches that we removed
get_processor_id() from the other architectures, as being fully
redundant with smp_processor_id(). Is there a particular reason you
re-introduce that now for RISC-V?

> +#define set_processor_id(id) do { \
> +    tp->processor_id = (id);     \
> +} while (0)
> +
> +static inline unsigned int smp_processor_id(void)
> +{
> +    unsigned int id = get_processor_id();
> +
> +    BUG_ON(id > (NR_CPUS - 1));

The more conventional way of expressing this is >= NR_CPUS.

> @@ -14,6 +16,13 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
>   */
>  #define park_offline_cpus false
>
> +/*
> + * Mapping between linux logical cpu index and hartid.
> + */
> +#define cpuid_to_hartid(cpu) (pcpu_info[cpu].hart_id)

Does this need to be a macro (rather than an inline function)?

> @@ -72,6 +77,16 @@ FUNC(reset_stack)
>         ret
> END(reset_stack)
>
> +/* void setup_tp(unsigned int xen_cpuid); */
> +FUNC(setup_tp)
> +        la      tp, pcpu_info
> +        li      t0, PCPU_INFO_SIZE
> +        mul     t1, a0, t0
> +        add     tp, tp, t1
> +
> +        ret
> +END(setup_tp)

I take it this is going to run (i.e. also for secondary CPUs) ahead of
Xen being able to handle any kind of exception (on the given CPU)? If
so, all is fine here. If not, transiently pointing tp at CPU0's space
is a possible problem.

Jan