Re: [Xen-devel] [PATCH v6 4/4] x86/ioreq server: Reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
On 9/9/2016 4:20 PM, Jan Beulich wrote:
>>>> On 09.09.16 at 09:24, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> On 9/9/2016 1:26 PM, Yu Zhang wrote:
>>>>>> On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>>>> @@ -965,7 +968,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>>>>>      if ( is_epte_valid(ept_entry) )
>>>>>      {
>>>>>          if ( (recalc || ept_entry->recalc) &&
>>>>> -             p2m_is_changeable(ept_entry->sa_p2mt) )
>>>>> +             p2m_is_changeable(ept_entry->sa_p2mt) &&
>>>>> +             (ept_entry->sa_p2mt != p2m_ioreq_server) )
>>>>>              *t = p2m_is_logdirty_range(p2m, gfn, gfn) ? p2m_ram_logdirty
>>>>>                                                        : p2m_ram_rw;
>>>>>          else
>>>> Considering this (and at least one more similar adjustment further
>>>> down), is it really appropriate to include p2m_ioreq_server in the
>>>> set of "changeable" types?
>>> Well, I agree p2m_ioreq_server does behave differently from
>>> p2m_log_dirty, but removing p2m_ioreq_server from the changeable
>>> types would need more code specific to p2m_ioreq_server in routines
>>> like resolve_misconfig(), do_recalc() and
>>> p2m_change_entry_type_global(), etc. I tried this approach and
>>> abandoned it later. :)
> I can see that the other option might require more adjustments, in
> which case I guess this variant would be fine if you created another
> (well named) helper inline function instead of open coding this in
> several places. Of course such dissimilar handling in the various
> places p2m_is_changeable() gets used right now will additionally
> require good reasoning - after all that predicate exists to express
> the similarities between different code paths.
>
> Jan

Thanks, Jan.

Well, for the logic of p2m type recalculation, the similarities between
p2m_ioreq_server and the other changeable types exceed their
differences. As to the special cases, how about we use a macro, e.g.
p2m_is_ioreq?

Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
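[Editorial note] The helper discussed above can be sketched as follows. This is a hypothetical, self-contained model of the predicate style used in Xen's p2m code (one bit per type, predicates test a mask): the enum values and the P2M_CHANGEABLE_TYPES mask here are illustrative stand-ins, not Xen's actual definitions, and the p2m_is_ioreq name is the one Yu proposes in the thread.

```c
#include <assert.h>

/* Illustrative subset of p2m types; real Xen defines many more. */
typedef enum {
    p2m_ram_rw,
    p2m_ram_logdirty,
    p2m_ioreq_server,
    p2m_mmio_dm,
} p2m_type_t;

/* One bit per type, so predicates become single mask tests. */
#define p2m_to_mask(t_) (1UL << (t_))

/* Types subject to lazy recalculation, assuming p2m_ioreq_server
 * stays in the changeable set as argued in the thread. */
#define P2M_CHANGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) | \
                              p2m_to_mask(p2m_ram_logdirty) | \
                              p2m_to_mask(p2m_ioreq_server))

#define p2m_is_changeable(t_) \
    ((p2m_to_mask(t_) & P2M_CHANGEABLE_TYPES) != 0)

/* The proposed helper: name the special case once instead of
 * open-coding "!= p2m_ioreq_server" at every call site. */
#define p2m_is_ioreq(t_) ((t_) == p2m_ioreq_server)
```

With such a helper, the condition patched into ept_get_entry() above could read `p2m_is_changeable(ept_entry->sa_p2mt) && !p2m_is_ioreq(ept_entry->sa_p2mt)`, keeping the special-casing in one named place.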