
Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock

On 14.10.20 08:52, Jan Beulich wrote:
> On 14.10.2020 08:00, Jürgen Groß wrote:
>> On 13.10.20 17:28, Jan Beulich wrote:
>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>> --- a/xen/include/xen/event.h
>>>> +++ b/xen/include/xen/event.h
>>>> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int 
>>>>    #define bucket_from_port(d, p) \
>>>>        ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])

>>> Isn't the ceiling on simultaneous readers the number of pCPU-s,
>>> and the value here then needs to be NR_CPUS + 1 to accommodate
>>> the maximum number of readers? Furthermore, with you dropping
>>> the disabling of interrupts, one pCPU can acquire a read lock
>>> now more than once, when interrupting a locked region.

>> Yes, I think you are right.
>> 
>> So at least 2 * (NR_CPUS + 1), or even 3 * (NR_CPUS + 1) for covering
>> NMIs, too?

> Hard to say: Even interrupts can in principle nest. I'd go further
> and use e.g. INT_MAX / 4, albeit no matter what value we choose
> there'll remain a theoretical risk. I'm therefore not fully
> convinced of the concept, irrespective of it providing an elegant
> solution to the problem at hand. I'd be curious what others think.

I just realized I should add a sanity test in evtchn_write_lock() to
exclude the case of multiple writers (this should never happen due to
all writers locking d->event_lock).

This in turn means we can set EVENT_WRITE_LOCK_INC to INT_MIN and use
negative lock values for a write-locked event channel.

Hitting this limit seems to require quite high values of NR_CPUS, even
with nested interrupts (I'm quite sure we'd run out of stack space
long before this limit could be hit, even with 16 million CPUs).



