Re: [Xen-devel] [PATCH v5 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation
>>> On 19.09.16 at 04:50, <dongli.zhang@xxxxxxxxxx> wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1004,6 +1004,14 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>  {
>      int old, new, prev = d->controller_pause_count;
>
> +    /*
> +     * We record this information here for populate_physmap to figure out
> +     * that the domain has finished being created. In fact, we're only
> +     * allowed to set the MEMF_no_tlbflush flag during VM creation.
> +     */
> +    if ( unlikely(!d->creation_finished) )
> +        d->creation_finished = true;

Already on a much earlier version it was pointed out that the conditional
here is rather pointless and potentially confusing. Please remove it unless
you have a very good reason for it to be there.

> @@ -150,6 +152,17 @@ static void populate_physmap(struct memop_args *a)
>                              max_order(curr_d)) )
>          return;
>
> +    /*
> +     * With MEMF_no_tlbflush set, alloc_heap_pages() will ignore
> +     * TLB-flushes. After VM creation, this is a security issue (it can
> +     * make pages accessible to guest B, when guest A may still have a
> +     * cached mapping to them). So we only do this only during domain

Duplicate "only".

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -474,6 +474,12 @@ struct domain
>          unsigned int guest_request_enabled     : 1;
>          unsigned int guest_request_sync        : 1;
>      } monitor;
> +
> +    /*
> +     * Set to true at the very end of domain creation, when the domain is
> +     * unpaused for the first time by the systemcontroller.
> +     */
> +    bool creation_finished;

Please place this next to the other group of booleans, the more that there
is a 1 byte padding slot available there (or even 2 bytes when
!CONFIG_HAS_PASSTHROUGH).

Jan
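For context, below is a minimal sketch of what the two code paths amount to
once the comments above are addressed. This is an illustration only, not the
actual patch: surrounding code is elided, and the flush-filtering helper is
assumed to be the one introduced by patch 1/2 of this series.

/*
 * xen/common/domain.c -- domain_unpause_by_systemcontroller(): the
 * assignment is idempotent, so no conditional around it is needed.
 */
int domain_unpause_by_systemcontroller(struct domain *d)
{
    int old, new, prev = d->controller_pause_count;

    /*
     * populate_physmap() checks this to tell whether the domain has
     * finished being created; MEMF_no_tlbflush may only be used before
     * that point.
     */
    d->creation_finished = true;

    /* ... existing unpause logic and return value unchanged ... */
}

/*
 * xen/common/memory.c -- populate_physmap(): request the TLB-flush
 * exemption only while the domain is still being constructed, and issue
 * one filtered flush before the pages become visible to the guest.
 */
static void populate_physmap(struct memop_args *a)
{
    struct domain *d = a->domain;
    bool need_tlbflush = false;
    uint32_t tlbflush_timestamp = 0;

    /* ... argument and order checks as quoted above ... */

    if ( unlikely(!d->creation_finished) )
        /*
         * Safe only before the first unpause: no vCPU has run yet, so no
         * stale guest mapping of these pages can exist.
         */
        a->memflags |= MEMF_no_tlbflush;

    /*
     * ... allocation loop; each allocated page updates need_tlbflush and
     * tlbflush_timestamp (accumulation helper from patch 1/2) ...
     */

    if ( need_tlbflush )
        filtered_flush_tlb_mask(tlbflush_timestamp);
}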