Re: [Xen-devel] [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
On 3/22/2017 10:39 PM, Jan Beulich wrote:
> On 21.03.17 at 03:52, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hvm/dm.c
>> +++ b/xen/arch/x86/hvm/dm.c
>> @@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
>>      case XEN_DMOP_map_mem_type_to_ioreq_server:

We have not discussed this. Our previous discussion was about the if condition before calling hvm_map_mem_type_to_ioreq_server(). :-)

Maybe the above code should be changed to:
@@ -400,11 +400,14 @@ static int dm_op(domid_t domid,
         if ( first_gfn == 0 )
             rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
                                                   data->type,
                                                   data->flags);
+        else
+            rc = 0;
+
         /*
          * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
          * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
          */
-        if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
+        if ( data->flags == 0 && rc == 0 )
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
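For clarity, here is a minimal, self-contained sketch of the control flow the above change is meant to produce. This is illustration only, not Xen code: the map_data struct and the *_stub helpers are hypothetical stand-ins for the real dm.c logic.

/*
 * Illustration only -- a standalone sketch of the continuation logic:
 * the ioreq server map/unmap itself runs only on the first iteration
 * (first_gfn == 0); the p2m sweep that resets p2m_ioreq_server entries
 * to p2m_ram_rw runs on every iteration of an unmap (flags == 0), as
 * long as no error has been seen.
 */
#include <stdint.h>
#include <stdio.h>

struct map_data {
    uint64_t first_gfn;   /* 0 on the first iteration, >0 on continuations */
    uint32_t type;
    uint32_t flags;       /* 0 means "unmap from the ioreq server" */
};

/* Hypothetical stand-in for hvm_map_mem_type_to_ioreq_server(). */
static int map_mem_type_to_ioreq_server_stub(uint32_t type, uint32_t flags)
{
    printf("map/unmap type %u with flags %u\n",
           (unsigned)type, (unsigned)flags);
    return 0;
}

/* Hypothetical stand-in for the preemptible p2m sweep. */
static void reset_p2m_entries_stub(uint64_t first_gfn)
{
    printf("sweeping p2m from gfn %llu, resetting to p2m_ram_rw\n",
           (unsigned long long)first_gfn);
}

static int handle_iteration(const struct map_data *data)
{
    int rc;

    /* The mapping/unmapping itself happens only on the first iteration. */
    if ( data->first_gfn == 0 )
        rc = map_mem_type_to_ioreq_server_stub(data->type, data->flags);
    else
        rc = 0;

    /* The sweep runs for every iteration of an unmap, while rc == 0. */
    if ( data->flags == 0 && rc == 0 )
        reset_p2m_entries_stub(data->first_gfn);

    return rc;
}

int main(void)
{
    struct map_data d = { .first_gfn = 0, .type = 0, .flags = 0 };

    handle_iteration(&d);   /* first iteration: unmap + start of sweep */
    d.first_gfn = 256;      /* pretend the sweep was preempted here */
    handle_iteration(&d);   /* continuation: sweep only */
    return 0;
}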
Sorry? I do not get it. Paul suggested we replace last_gfn with max_nr, which sounds reasonable to me. Guess you mean something else?

Thanks
Yu

> Jan
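In case it helps to spell out what that replacement means, a tiny sketch follows, contrasting a range described by an inclusive last_gfn with one described by a count. The struct and field names below are made up for illustration and are not the actual DMOP interface.

/*
 * Illustration only -- hypothetical argument layouts for the same request,
 * once with an inclusive last gfn and once with a count (max_nr).
 */
#include <assert.h>
#include <stdint.h>

struct range_by_last_gfn {
    uint64_t first_gfn;
    uint64_t last_gfn;   /* inclusive upper bound */
};

struct range_by_max_nr {
    uint64_t first_gfn;
    uint64_t max_nr;     /* number of gfns to process, starting at first_gfn */
};

int main(void)
{
    struct range_by_last_gfn a = { .first_gfn = 0, .last_gfn = 255 };
    struct range_by_max_nr  b = { .first_gfn = 0, .max_nr = 256 };

    /* Both describe the same 256-gfn range. */
    assert(a.last_gfn - a.first_gfn + 1 == b.max_nr);
    return 0;
}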
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel