
Re: [PATCH 1/9] gnttab: defer allocation of maptrack frames table



Hi Jan,

On 06/09/2021 14:29, Jan Beulich wrote:
On 06.09.2021 15:13, Julien Grall wrote:
On 26/08/2021 11:09, Jan Beulich wrote:
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -633,6 +633,34 @@ get_maptrack_handle(
       if ( likely(handle != INVALID_MAPTRACK_HANDLE) )
           return handle;
+    if ( unlikely(!read_atomic(&lgt->maptrack)) )
+    {
+        struct grant_mapping **maptrack = NULL;
+
+        if ( lgt->max_maptrack_frames )
+            maptrack = vzalloc(lgt->max_maptrack_frames * sizeof(*maptrack));

While I understand that allocating while holding the lock is a bad idea,
I don't like the fact that multiple vCPUs racing each other would result
in temporarily allocating more memory.

If moving the call within the lock is not an option, would a loop with
cmpxchg() be a solution to block the other vCPUs?

As with any such loop, the question then is for how long to retry. No matter
what (static) loop bound you choose, if you exceed it you would return an
error to the caller for no reason.

I was thinking of having an unbounded loop. This would be no better and no worse than the current implementation because of the existing lock.


As to the value to store by cmpxchg() - were you thinking of some "alloc in
progress" marker?

Yes.

You obviously can't store the result of the allocation
before actually doing the allocation, yet it is unnecessary allocations
that you want to avoid (i.e. to achieve your goal the allocation would need
to come after the cmpxchg()). A marker would further complicate the other
code here, even if (maybe) just slightly ...

Right, the code will be a bit more complicated (although it would not be that bad if moved into a separate function...), but I feel it is better than the multiple vzalloc() calls.
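
To make this more concrete, here is a rough sketch of the kind of separate function I have in mind. It is not even compile-tested: the function name and the sentinel value are made up for the example, only the field names (lgt->maptrack, lgt->max_maptrack_frames) come from the hunk above, and it assumes the usual read_atomic()/write_atomic()/cmpxchg() helpers can be applied to the pointer-sized maptrack field.

/* Sentinel used as the "allocation in progress" marker. */
#define MAPTRACK_ALLOC_IN_PROGRESS ((struct grant_mapping **)1)

static int maptrack_table_alloc(struct grant_table *lgt)
{
    struct grant_mapping **mt;

    for ( ; ; )
    {
        mt = read_atomic(&lgt->maptrack);

        if ( mt && mt != MAPTRACK_ALLOC_IN_PROGRESS )
            return 0;                 /* Another vCPU already did the work. */

        if ( !mt &&
             cmpxchg(&lgt->maptrack, NULL,
                     MAPTRACK_ALLOC_IN_PROGRESS) == NULL )
            break;                    /* We won the race; go and allocate. */

        cpu_relax();                  /* Allocation in progress elsewhere. */
    }

    mt = lgt->max_maptrack_frames
         ? vzalloc(lgt->max_maptrack_frames * sizeof(*mt))
         : NULL;

    if ( !mt )
    {
        /* Undo the marker so a later caller can retry the allocation. */
        write_atomic(&lgt->maptrack, (struct grant_mapping **)NULL);
        return -ENOMEM;
    }

    write_atomic(&lgt->maptrack, mt);

    return 0;
}

This keeps a single vzalloc() per domain, at the price of every caller having to recognise (and spin on) the marker, which is the extra complication you mention above.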

Cheers,

--
Julien Grall



 

