Re: [Xen-devel] [PATCH v2 qemu-trad] HVM: atomically access pointers in bufioreq handling
Jan Beulich writes ("[PATCH v2 qemu-trad] HVM: atomically access pointers in
bufioreq handling"):
> The number of slots per page being 511 (i.e. not a power of two) means
> that the (32-bit) read and write indexes going beyond 2^32 will likely
> disturb operation. The hypervisor side gets I/O req server creation
> extended so we can indicate that we're using suitable atomic accesses
> where needed, allowing it to atomically canonicalize both pointers when
> both have gone through at least one cycle.
>
> The Xen side counterpart (which is not a functional prereq to this
> change, albeit the intention is for Xen to assume default servers
> always use suitable atomic accesses) went in already (commit
> b7007bc6f9).
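To spell out the failure mode: 2^32 is not a multiple of 511, so the
pointer-to-slot mapping restarts at slot 0 instead of continuing at
slot 32 when a pointer wraps, and freshly written entries can then land
in slots still holding unconsumed ones.  A minimal standalone
illustration (slot_of() is just a made-up helper here, not code from
the patch):

  #include <stdint.h>
  #include <stdio.h>

  #define IOREQ_BUFFER_SLOT_NUM 511u  /* deliberately not a power of two */

  /* Slot addressed by a given free-running 32-bit pointer value. */
  static unsigned int slot_of(uint32_t ptr)
  {
      return ptr % IOREQ_BUFFER_SLOT_NUM;
  }

  int main(void)
  {
      uint32_t before = 0xFFFFFFFFu;  /* last value before the 32-bit wrap */
      uint32_t after  = before + 1;   /* wraps around to 0 */

      /*
       * 2^32 mod 511 == 32, so the slot sequence restarts at 0 instead
       * of continuing at 32 across the wrap: slot_of(before) is 31, but
       * slot_of(after) is 0 rather than 32.  Entries written just after
       * the wrap can therefore overwrite slots still holding unconsumed
       * entries written just before it, even though write - read says
       * the ring is nowhere near full.
       */
      printf("%u -> %u\n", slot_of(before), slot_of(after));
      return 0;
  }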
Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
I have managed to convince myself the consumer side code is correct.
(Repeating some stuff that was said on irc:)
However, IMO having both sides allowed to update the read ptr is very
hazard-prone and confusing.
I suggested doing a conventional new protocol version (dm checks HV
feature, passes new version if available, both sides then speak the new
protocol), where the new protocol either has each side doing % on each
write of its own pointer, or alternatively simply drops the off-by-one
oddity in the ring size.
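For concreteness, the second alternative amounts to making the slot
count a power of two, so that 2^32 is a multiple of it and the 32-bit
wrap becomes harmless.  A sketch only; SLOT_NUM_POW2 is an illustrative
name, not anything in the tree:

  #include <stdint.h>
  #include <stdio.h>

  #define SLOT_NUM_POW2 512u  /* hypothetical ring without the off-by-one */

  static unsigned int slot_of(uint32_t ptr)
  {
      return ptr % SLOT_NUM_POW2;   /* same as ptr & (SLOT_NUM_POW2 - 1) */
  }

  int main(void)
  {
      /*
       * 2^32 is a multiple of 512, so the slot sequence stays continuous
       * across the 32-bit wrap: ..., 510, 511, 0, 1, ...
       */
      printf("%u %u %u\n",
             slot_of(0xFFFFFFFEu), slot_of(0xFFFFFFFFu), slot_of(0u));
      return 0;
  }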
Jan replied:
07:10 <jbeulich> Diziet, andyhhp: Having each side canonicalize its pointer
would break some of the comparisons of both pointers:
07:11 <jbeulich> Since readers would need to do modulo operations upon use,
buffer full and buffer empty would become indistinguishable.
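For reference, the distinction Jan means looks roughly like this under
the current free-running-pointer convention (names here are
illustrative, not the actual qemu or Xen code):

  #include <stdint.h>
  #include <stdbool.h>

  #define IOREQ_BUFFER_SLOT_NUM 511u

  /*
   * With free-running pointers (only reduced modulo the slot count at
   * the point where a slot is actually indexed), the two states differ:
   */
  static bool ring_empty(uint32_t rd, uint32_t wr)
  {
      return wr == rd;
  }

  static bool ring_full(uint32_t rd, uint32_t wr)
  {
      return (uint32_t)(wr - rd) == IOREQ_BUFFER_SLOT_NUM;
  }

  /*
   * If each side instead stored its pointer already reduced modulo
   * IOREQ_BUFFER_SLOT_NUM, a completely full ring would also end up
   * with wr == rd, and the two predicates above could no longer be
   * told apart.
   */
  int main(void)
  {
      /* A ring with all 511 slots in use is full, not empty: */
      return ring_full(1000u, 1000u + IOREQ_BUFFER_SLOT_NUM) &&
             !ring_empty(1000u, 1000u + IOREQ_BUFFER_SLOT_NUM) ? 0 : 1;
  }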
I'm not wholly convinced by this but I don't think I really want to
argue. So I have applied the qemu-side patch to qemu-xen-traditional.
Ian.