Re: [Xen-devel] [PATCH v1] Fix p2m_set_suppress_ve
On Thu, Apr 4, 2019 at 8:36 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> >>> On 04.04.19 at 15:09, <tamas@xxxxxxxxxxxxx> wrote:
> > I agree that it is confusing. It would be fine to UNSHARE here as well
> > to keep things consistent but otherwise it's not really an issue as
> > the entry type is checked later to ensure that this is a p2m_ram_rw
> > entry. We are simply trying to keep mem_sharing and _modified_ altp2m
> > entries exclusive. So it is fine to have mem_shared entries in the
> > hostp2m and have those entries be copied into altp2m tables lazily,
> > but for altp2m entries that have changed mem_access permissions or are
> > remapped we want the entries in the hostp2m to be of regular type.
> > This is not necessarily a technical requirement, it's mostly just to
> > reduce complexity. So it would be fine to add UNSHARE here as well, I
> > guess the only reason why I haven't done that is because I already
> > trigger the unshare and copy-to-altp2m before remapping by setting
> > dummy mem_access permission on the entries.
>
> I'm afraid I don't agree with this justification: mem-sharing is about
> contents of pages,

That's incorrect. Mem sharing doesn't care about the contents of pages at
all.

> whereas altp2m is about meta data (permissions
> etc). I don't see why one would want to unshare because of a meta
> data adjustment other than a page becoming non-CoW-writable.
> Eagerly un-sharing in the end undermines the intentions of sharing.

We are unsharing to keep altp2m and mem_sharing compatible but mutually
exclusive. Even though technically they could co-exist, last time I worked
on this we agreed on the mailing list to keep them exclusive in order to
reduce complexity and make reviewing easier.

Tamas
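For readers following along in the code, the path under discussion looks roughly
like the sketch below. This is only an illustration assuming the Xen 4.12-era p2m
interfaces as I recall them (the per-p2m get_entry() hook, the P2M_ALLOC and
P2M_UNSHARE query flags, p2m_ram_rw); it is not the patch itself, and the exact
signatures may differ in the version under review.

    /*
     * Illustrative sketch only, not the actual patch: the hostp2m-fallback
     * path in p2m_set_suppress_ve() for an altp2m view.
     */
    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
    struct p2m_domain *p2m = d->arch.altp2m_p2m[altp2m_idx]; /* view being edited */
    p2m_type_t t;
    p2m_access_t a;
    mfn_t mfn;
    int rc = 0;

    mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, NULL);
    if ( !mfn_valid(mfn) )
    {
        /*
         * No entry in the altp2m yet: fall back to the hostp2m.  Passing
         * P2M_UNSHARE alongside P2M_ALLOC here is the point debated above;
         * it breaks sharing for the page so that only a regular entry gets
         * copied into the altp2m lazily.
         */
        mfn = host_p2m->get_entry(host_p2m, gfn, &t, &a,
                                  P2M_ALLOC | P2M_UNSHARE, NULL, NULL);

        /* The type check keeps sharing and modified altp2m entries exclusive. */
        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
            rc = -ESRCH;
    }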