Re: [PATCH v6 6/9] xen/riscv: introduce functionality to work with CPU info
On 11.09.2024 14:05, oleksii.kurochko@xxxxxxxxx wrote:
> On Tue, 2024-09-10 at 12:33 +0200, Jan Beulich wrote:
>> On 02.09.2024 19:01, Oleksii Kurochko wrote:
>>> @@ -72,6 +77,16 @@ FUNC(reset_stack)
>>>         ret
>>> END(reset_stack)
>>>
>>> +/* void setup_tp(unsigned int xen_cpuid); */
>>> +FUNC(setup_tp)
>>> +        la      tp, pcpu_info
>>> +        li      t0, PCPU_INFO_SIZE
>>> +        mul     t1, a0, t0
>>> +        add     tp, tp, t1
>>> +
>>> +        ret
>>> +END(setup_tp)
>>
>> I take it this is going to run (i.e. also for secondary CPUs) ahead of
>> Xen being able to handle any kind of exception (on the given CPU)?
> Yes, I am using it for secondary CPUs and Xen are handling exceptions (
> on the given CPU ) fine.

Yet that wasn't my question. Note in particular the use of "ahead of".

>> If so, all is fine here. If not, transiently pointing tp at CPU0's
>> space is a possible problem.
> I haven't had any problem with that at the moment.
>
> Do you think that it will be better to use DECLARE_PER_CPU() with
> updating of setup_tp() instead of pcpu_info[] when SMP will be
> introduced?
> What kind of problems should I take into account?

If exceptions can be handled by Xen already when entering this function,
then the exception handler would need to be setting up tp for itself. If
not, it would use whatever the interrupted context used (or what is
brought into context by hardware while delivering the exception).

If I assumed that tp in principle doesn't need setting up when handling
exceptions (sorry, haven't read up enough yet about how guest -> host
switches work for RISC-V), and if further exceptions can already be
handled upon entering setup_tp(), then keeping tp properly invalid until
it can be set to its correct value will make it easier to diagnose
problems than when - like you do - transiently setting tp to CPU0's
value (and hence risking corruption of its state).

Jan
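
For reference, a rough C sketch of the address computation the setup_tp()
assembly above performs, i.e. tp = pcpu_info + xen_cpuid * PCPU_INFO_SIZE.
The struct fields and the pcpu_slot() helper shown here are illustrative
assumptions for this sketch, not the definitions from the patch series:

    /*
     * Illustrative sketch only: the real struct pcpu_info comes from the
     * patch series; the fields below are placeholders.
     */
    struct pcpu_info {
        unsigned int processor_id;   /* hypothetical field */
        unsigned long hart_id;       /* hypothetical field */
    };

    /* One slot per CPU; PCPU_INFO_SIZE == sizeof(struct pcpu_info). */
    extern struct pcpu_info pcpu_info[NR_CPUS];

    /*
     * C equivalent of setup_tp(): point tp at this CPU's slot, i.e.
     * tp = &pcpu_info[0] + xen_cpuid * sizeof(struct pcpu_info).
     */
    static inline struct pcpu_info *pcpu_slot(unsigned int xen_cpuid)
    {
        return &pcpu_info[xen_cpuid];
    }

Under the suggestion above, a secondary CPU would leave tp at a deliberately
invalid value until this computation can be done for its own xen_cpuid, so
that any premature use of tp faults visibly instead of silently touching
CPU0's slot.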