Re: [Xen-devel] [RFC PATCH 2/2] gnttab: refactor locking for better scalability

>>> On 12.11.13 at 09:07, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
> On 12/11/2013 07:18, "Matt Wilson" <msw@xxxxxxxxx> wrote:
>>> Is there any concern about writer starvation here? I know our spinlocks
>>> aren't 'fair' but our rwlocks are guaranteed to starve out writers if there
>>> is a steady continuous stream of readers. Perhaps we should write-bias our
>>> rwlock, or at least make that an option. We could get fancier but would
>>> probably hurt performance.
>> Yes, I'm a little concerned about writer starvation. But so far even
>> in the presence of very frequent readers it seems like the infrequent
>> writers are able to get the lock when they need to. However, I've not
>> tested the iommu=strict path yet. I'm thinking that in that case
>> there's just going to be frequent writers, so there's less risk of
>> readers starving writers. For what it's worth, when mapcount() gets in
>> the picture with persistent grants, I'd expect to see some pretty
>> significant performance degradation for map/unmap operations. This was
>> also observed in [1] under different circumstances.
> The average case isn't the only concern here, but also the worst case, which
> could maybe tie up a CPU for unbounded time. Could a malicious guest set up
> such a workload? I'm just thinking we don't want to end up with a DoS XSA on
> this down the line. :)
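The write-bias Keir suggests can be illustrated with a minimal, single-threaded sketch (hypothetical code, not Xen's actual rwlock; real code would use atomic operations): a pending writer sets a flag that turns away new readers, so the writer only has to wait for the readers already inside, bounding its wait.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of a write-biased rwlock.  Low 16 bits count active
 * readers; bit 16 marks a pending or active writer.  New readers
 * back off while a writer is pending, which prevents a steady
 * stream of readers from starving the writer.  Trylock-style API,
 * no atomics: illustration only.
 */
#define WRITER_PENDING (1u << 16)
#define READER_MASK    0xffffu

typedef struct { uint32_t state; } biased_rwlock_t;

static bool read_trylock(biased_rwlock_t *l)
{
    /* Refuse new readers whenever a writer is pending or active. */
    if (l->state & WRITER_PENDING)
        return false;
    l->state++;                   /* one more active reader */
    return true;
}

static void read_unlock(biased_rwlock_t *l)
{
    l->state--;                   /* one reader leaves */
}

static bool write_trylock(biased_rwlock_t *l)
{
    l->state |= WRITER_PENDING;   /* block new readers immediately */
    if (l->state & READER_MASK)
        return false;             /* existing readers must drain first */
    return true;
}

static void write_unlock(biased_rwlock_t *l)
{
    l->state &= ~WRITER_PENDING;  /* let readers back in */
}
```

Note that `write_trylock()` deliberately leaves `WRITER_PENDING` set while it waits, unlike a conventional trylock; that is the bias. The cost, as discussed below, is that readers arriving during that window stall too.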

And indeed I think we should be making our rwlocks fair for writers
before pushing in the change here; I've been meaning to get to this
for a while, but other stuff continues to require attention. I'm also
of the opinion that we should switch to ticket spinlocks.
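The ticket scheme mentioned above grants the lock in strict FIFO order, so no CPU can be starved. A minimal sketch (again hypothetical, not Xen's code; a real implementation would use an atomic fetch-and-add for the ticket grab):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of a ticket spinlock.  Each acquirer takes the next
 * ticket from 'tail' and spins until 'head' reaches it; unlock
 * advances 'head', serving waiters in arrival order.
 */
typedef struct {
    volatile uint16_t head;  /* ticket currently being served */
    uint16_t          tail;  /* next ticket to hand out */
} ticket_lock_t;

static uint16_t ticket_lock(ticket_lock_t *l)
{
    uint16_t mine = l->tail++;   /* would be an atomic fetch-and-add */
    while (l->head != mine)
        ;                        /* spin until our ticket comes up */
    return mine;
}

static void ticket_unlock(ticket_lock_t *l)
{
    l->head++;                   /* serve the next waiter in order */
}
```

The 16-bit counters wrap harmlessly as long as fewer than 65536 CPUs contend, which is why this layout is common in practice.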

But of course, fairness for writers means that performance may
drop again on the read paths, unless the write lock use is strictly
limited to code paths not (normally) involved in I/O.

