
Re: [Xen-devel] [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.





On 3/21/2017 9:49 PM, Paul Durrant wrote:
> -----Original Message-----
> [snip]
>>>> +        if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
>>>> +        {
>>>> +            struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> +
>>>> +            while ( read_atomic(&p2m->ioreq.entry_count) &&
>>>> +                    first_gfn <= p2m->max_mapped_pfn )
>>>> +            {
>>>> +                /* Iterate p2m table for 256 gfns each time. */
>>>> +                last_gfn = first_gfn + 0xff;
+
>>> Might be worth a comment here to say that p2m_finish_type_change()
>>> limits last_gfn appropriately, because it kind of looks wrong to be
>>> blindly calling it with first_gfn + 0xff. Or perhaps, rather than
>>> passing last_gfn, pass a 'max_nr' parameter of 256 instead. Then you
>>> can drop last_gfn altogether. If you prefer the parameters as they
>>> are, then at least limit the scope of last_gfn to this while loop.
>> Thanks for your comments, Paul. :)
>> Well, setting last_gfn to first_gfn + 0xff looks a bit awkward. But
>> why would using a 'max_nr' with a magic number, say 256, look better?
>> Or are there any other benefits? :-)

> Well, to my eyes calling it max_nr in the function would make it clear
> it's a limit rather than a definite count and then passing 256 in the
> call would make it clear that it is the chosen batch size.
>
> Does that make sense?

Sounds reasonable. Thanks! :-)
Yu
> Paul
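
A minimal sketch of the loop with the 'max_nr' suggestion applied,
assuming p2m_finish_type_change() is changed to take a count limit
rather than a last gfn (the signature shown is illustrative, not
necessarily the final patch):

    struct p2m_domain *p2m = p2m_get_hostp2m(d);

    while ( read_atomic(&p2m->ioreq.entry_count) &&
            first_gfn <= p2m->max_mapped_pfn )
    {
        /*
         * Process at most 256 gfns per call: 'max_nr' is an upper
         * bound, not a definite count (the callee stops early at
         * p2m->max_mapped_pfn), and 256 is the chosen batch size.
         */
        rc = p2m_finish_type_change(d, first_gfn, 256);
        if ( rc < 0 )
            break;

        first_gfn += 256;
    }

With max_nr passed in, last_gfn disappears from the caller entirely,
which also addresses the scoping concern raised above.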


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

