
Re: vcpu_show_execution_state() difference between Arm and x86

On 01/09/2021 14:39, Jan Beulich wrote:

back in 2016 Andrew added code to x86'es variant to avoid interleaving
of output. The same issue ought to exist on Arm.

Agree. I guess we have got away with it so far because it is pretty rare to have two CPUs printing at the same time.

The lock acquired,
or more importantly the turning off of IRQs while doing so, is now
getting in the way of having PVH Dom0's state dumped the 2nd time.

I am not quite sure I understand the problem with PVH Dom0. Do you have a pointer to the issue?

register state I did find a sufficiently simple (yet not pretty)
workaround. For the stack, where I can't reasonably avoid using p2m
functions, this is going to be more difficult.
Since I expect Arm to want to also have interleave protection at some
point, and since Arm also acquires the p2m lock while accessing Dom0's
stacks, I wonder whether anyone has any clever idea on how to avoid
the (valid) triggering of check_lock()'s assertion without intrusive
changes. (As to intrusive changes - acquiring the p2m lock up front in
recursive mode, plus silencing check_lock() for nested acquires of a
lock that's already being held by a CPU was my initial idea.)

At least on Arm, the P2M lock is a rwlock which is not yet recursive. But then, it feels to me that this solution is only going to cause us more trouble in the future.

I looked at the original commit to find out the reason for using the console lock. AFAICT, this was to allow console_force_unlock() to work properly. But it is not entirely clear why we couldn't introduce a new lock (with IRQs enabled) that could be force-unlocked in that function.

Can either you or Andrew clarify it?

The other solution I can think of is buffering the output for show_registers() and only printing it once at the end. The downside is that we may not get any output if there is an issue in the middle of the dump.


Julien Grall


