
Re: Use of newer locking primitives...



On 30/08/2022 13:00, Martin Harvey wrote:
> Well, now you mention it, I've seen a couple of amusing cases where
> TransmitterRingPollDpc takes 4ms to complete, which leads to Tx jitter. I can
> see two potential issues:

> - The use of the Tx ring lock as both a lock and a queue. If we undo the "roll our
> own" bit of this and use a plain lock and a plain queue, we can then use the faster
> sync primitives as tested, debugged, tuned, and tweaked by Microsoft.


I wouldn't necessarily be convinced that moving away from the atomic lock/queue would be any faster. Limiting the queue drain may help from a fairness PoV, which I guess could be achieved by actually blocking on the ring lock if the queue gets to a certain length.
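
(For illustration, here is roughly what that shape could look like with stock WDK primitives - a plain KSPIN_LOCK plus a LIST_ENTRY list, with submitters blocking on the ring lock once the backlog passes a limit. TX_QUEUE, TxQueueSubmit and the threshold are invented names for the sketch, not anything in xenvif; the hand-off race that the combined lock/queue closes atomically is handled here by the drainer's re-check loop.)

#include <ntddk.h>

/* Sketch only: invented names, initialization omitted.  Everything
 * below runs at DISPATCH_LEVEL. */

#define TX_DRAIN_BLOCK_THRESHOLD    64      /* arbitrary */

typedef struct _TX_PACKET {
    LIST_ENTRY  ListEntry;
    /* ... fragments ... */
} TX_PACKET, *PTX_PACKET;

typedef struct _TX_QUEUE {
    KSPIN_LOCK  RingLock;   /* serializes access to the shared ring */
    KSPIN_LOCK  ListLock;   /* protects List and Count */
    LIST_ENTRY  List;       /* packets waiting to go onto the ring */
    LONG        Count;
} TX_QUEUE, *PTX_QUEUE;

/* Post deferred packets while holding RingLock, then re-check after
 * dropping it so a packet queued mid-drain is not stranded (this is the
 * hand-off that the combined lock/queue does atomically). */
static VOID
TxQueueDrain(
    IN  PTX_QUEUE   Queue
    )
{
    do {
        for (;;) {
            PLIST_ENTRY ListEntry;
            PTX_PACKET  Packet;

            KeAcquireSpinLockAtDpcLevel(&Queue->ListLock);
            if (IsListEmpty(&Queue->List)) {
                KeReleaseSpinLockFromDpcLevel(&Queue->ListLock);
                break;
            }
            ListEntry = RemoveHeadList(&Queue->List);
            Queue->Count--;
            KeReleaseSpinLockFromDpcLevel(&Queue->ListLock);

            Packet = CONTAINING_RECORD(ListEntry, TX_PACKET, ListEntry);
            /* ... grant the fragments and put Packet on the shared ring ... */
        }

        KeReleaseSpinLockFromDpcLevel(&Queue->RingLock);

        /* Unlocked peek: a submitter racing with this check retries the
         * ring lock itself after inserting, so nothing is lost. */
    } while (!IsListEmpty(&Queue->List) &&
             KeTryToAcquireSpinLockAtDpcLevel(&Queue->RingLock));
}

VOID
TxQueueSubmit(
    IN  PTX_QUEUE   Queue,
    IN  PTX_PACKET  Packet
    )
{
    LONG    Count;

    KeAcquireSpinLockAtDpcLevel(&Queue->ListLock);
    InsertTailList(&Queue->List, &Packet->ListEntry);
    Count = ++Queue->Count;
    KeReleaseSpinLockFromDpcLevel(&Queue->ListLock);

    if (Count >= TX_DRAIN_BLOCK_THRESHOLD) {
        /* Backlog is long: block (spin) on the ring lock and help drain
         * rather than piling yet more work onto the current holder. */
        KeAcquireSpinLockAtDpcLevel(&Queue->RingLock);
        TxQueueDrain(Queue);
    } else if (KeTryToAcquireSpinLockAtDpcLevel(&Queue->RingLock)) {
        /* Ring is idle: drain it ourselves. */
        TxQueueDrain(Queue);
    }
    /* Otherwise another CPU holds the ring lock and its drain loop will
     * pick this packet up. */
}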

> - We do a XenBus GntTabRevoke for every single damn fragment individually on
> the Tx fast path. There are probably some ways of batching / deferring these so
> as to make everything run a bit more smoothly.

A revoke ought to be fairly cheap; it's essentially some bit flips. It does, of course, involve some memory barriering... but is that really slowing things down?
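
(For reference, ending foreign access to a v1 grant entry comes down to roughly the following: a guarded 16-bit compare-exchange on the entry's flags, where the interlocked operation supplies the full barrier. The entry layout and GTF_* values follow Xen's public grant_table.h; GntRevokeAccess and the WDK types are just for the sketch, not the XENBUS_GNTTAB interface itself.)

#include <ntddk.h>

#define GTF_reading     0x0008      /* peer currently has a read mapping  */
#define GTF_writing     0x0010      /* peer currently has a write mapping */

typedef struct _GRANT_ENTRY_V1 {
    USHORT  Flags;
    USHORT  DomId;
    ULONG   Frame;
} GRANT_ENTRY_V1, *PGRANT_ENTRY_V1;

/* Returns TRUE if access was revoked, FALSE if the peer still has the
 * page mapped and the revoke must be deferred. */
BOOLEAN
GntRevokeAccess(
    IN  PGRANT_ENTRY_V1 Entry
    )
{
    SHORT   Old = (SHORT)Entry->Flags;

    for (;;) {
        SHORT   Flags;

        if (Old & (GTF_reading | GTF_writing))
            return FALSE;   /* still in use by the backend */

        /* Clear the flags.  The interlocked compare-exchange is a full
         * memory barrier - that is the "memory barriering" part; the
         * rest really is just bit flips. */
        Flags = InterlockedCompareExchange16((SHORT volatile *)&Entry->Flags,
                                             0,
                                             Old);
        if (Flags == Old)
            return TRUE;

        Old = Flags;        /* flags changed under us; retry */
    }
}

So it's essentially one locked compare-exchange per fragment: cheap on its own, but it is a full barrier repeated for every fragment inside the poll DPC, which is presumably where the batching idea comes in.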

  Paul



 

