
Re: [Xen-devel] [PATCH v2] x86/domain_page: implement pure per-vCPU mapping infrastructure



On 21.02.2020 15:52, Xia, Hongyan wrote:
> On Fri, 2020-02-21 at 14:31 +0100, Jan Beulich wrote:
>> On 21.02.2020 13:52, Xia, Hongyan wrote:
>>> On Fri, 2020-02-21 at 11:50 +0000, Wei Liu wrote:
>>>> On Thu, Feb 06, 2020 at 06:58:23PM +0000, Hongyan Xia wrote:
>>>>> +    if ( hashmfn != mfn && !vcache->refcnt[idx] )
>>>>> +        __clear_bit(idx, vcache->inuse);
>>>>
>>>> Also, please flush the linear address here and the other
>>>> __clear_bit
>>>> location.
>>>
>>> I flush when a new entry takes a slot. Yes, it's probably better
>>> to flush earlier, whenever a slot is no longer in use.
>>
>> Question is whether such individual flushes aren't actually
>> more overhead than a single flush covering all previously
>> torn down entries, done at suitable points (see the present
>> implementation).
> 
> There is certainly room for improvement. I am considering flushing
> entries in batches to reduce the overhead, e.g. in a similar way to
> the current implementation, as you said.
> 
> I want to defer that to a separate patch, though, because this one is
> already huge. From the benchmarks I have done so far, the per-slot
> flushes have no noticeable overhead, and the patch already alleviates
> the lock contention. In addition, this path is currently used only in
> debug builds, so I would like to defer the optimisation a bit.
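
To make the two options concrete, below is a minimal sketch (not taken
from the patch) of the per-slot variant Wei asks for, i.e. flushing the
linear address at the point where the inuse bit is cleared. The struct
name vcpu_maphash and the helper mapcache_vcpu_vaddr() are made up for
illustration; only vcache->inuse, vcache->refcnt and idx appear in the
patch snippet quoted above.

static void vcache_release_slot(struct vcpu_maphash *vcache,
                                unsigned int idx)
{
    if ( !vcache->refcnt[idx] )
    {
        /* Hypothetical helper: linear address backing slot 'idx'. */
        unsigned long va = mapcache_vcpu_vaddr(vcache, idx);

        __clear_bit(idx, vcache->inuse);

        /*
         * Flush this one linear address immediately, so a later reuse
         * of the slot cannot be reached through a stale TLB entry.
         */
        flush_tlb_one_local(va);
    }
}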

This is certainly an acceptable approach. "Only used in debug builds"
isn't an overly helpful justification, though, as (a) you aim to use
this in release builds, and (b) on systems with enough RAM it is
already used in release builds. Plus (c) it is fairly simple to make
release builds use it in all cases as well, by dropping the shortcut.

Along the lines of what I said in the other reply to Wei just a few
minutes ago: any deviation from the present implementation's
assumptions or guarantees needs at least to be called out (and you may
have done so; I simply haven't got to look at the patch itself yet).
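
For comparison, the batched variant being deferred to a separate patch
could look roughly like the sketch below: released slots are only
marked in a hypothetical 'garbage' bitmap and are all recovered with a
single local TLB flush at a suitable point, similar in spirit to the
present domain_page.c implementation. The field and function names are
illustrative, not from the patch.

static void vcache_release_slot_deferred(struct vcpu_maphash *vcache,
                                         unsigned int idx)
{
    if ( !vcache->refcnt[idx] )
        __set_bit(idx, vcache->garbage);   /* no flush yet */
}

static void vcache_recover_garbage(struct vcpu_maphash *vcache,
                                   unsigned int nr_slots)
{
    unsigned int i;

    /* One flush covers all previously torn down entries. */
    flush_tlb_local();

    for ( i = 0; i < nr_slots; i++ )
        if ( __test_and_clear_bit(i, vcache->garbage) )
            __clear_bit(i, vcache->inuse);
}

Whether the single flush ends up cheaper than per-slot INVLPGs
presumably depends on how many entries are typically recovered at
once, which is exactly what the benchmarking mentioned above would
need to show.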

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
