
Re: [Xen-devel] Ongoing/future speculative mitigation work



On Fri, Oct 26, 2018 at 05:20:47AM -0600, Jan Beulich wrote:
> >>> On 26.10.18 at 12:51, <george.dunlap@xxxxxxxxxx> wrote:
> > On 10/26/2018 10:56 AM, Jan Beulich wrote:
> >>>>> On 26.10.18 at 11:28, <wei.liu2@xxxxxxxxxx> wrote:
> >>> On Fri, Oct 26, 2018 at 03:16:15AM -0600, Jan Beulich wrote:
> >>>>>>> On 25.10.18 at 18:29, <andrew.cooper3@xxxxxxxxxx> wrote:
> >>>>> A split xenheap model means that data pertaining to other guests isn't
> >>>>> mapped in the context of this vcpu, so cannot be brought into the cache.
> >>>>
> >>>> It was not clear to me from Wei's original mail that talk here is
> >>>> about "split" in a sense of "per-domain"; I was assuming the
> >>>> CONFIG_SEPARATE_XENHEAP mode instead.
> >>>
> >>> The split heap was indeed referring to CONFIG_SEPARATE_XENHEAP mode, yet
> >>> what I wanted most is the partial direct map, which reduces the amount
> >>> of data mapped inside Xen's context -- the original idea was removing the
> >>> direct map, as discussed during one of the calls IIRC. I thought making
> >>> the partial direct map mode work, and making it as small as possible,
> >>> would get us 90% there.
> >>>
> >>> The "per-domain" heap is a different work item.
> >> 
> >> But if we mean to go that route, going (back) to the separate
> >> Xen heap model seems just like an extra complication to me.
> >> Yet I agree that this would remove the need for a fair chunk of
> >> the direct map. Otoh a statically partitioned Xen heap would
> >> bring back scalability issues which we had specifically meant to
> >> get rid of by moving away from that model.
> > 
> > I think turning SEPARATE_XENHEAP back on would just be the first step.
> > We definitely would then need to sort things out so that it's scalable
> > again.
> > 
> > After system set-up, the key difference between xenheap and domheap
> > pages is that xenheap pages are assumed to be always mapped (i.e., you
> > can keep a pointer to them and it will remain valid), whereas domheap
> > pages cannot be assumed to be mapped, so accesses to them need to be
> > wrapped with [un]map_domain_page().
> > 
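(Just to illustrate the usage-pattern difference -- a minimal sketch only,
assuming the current xen/mm.h and xen/domain_page.h interfaces, with
memflags, NUMA and error-handling details omitted:)

static void heap_usage_sketch(void)
{
    void *v = alloc_xenheap_pages(0, 0);          /* order 0, no memflags */
    struct page_info *pg = alloc_domheap_pages(NULL, 0, 0);

    /* xenheap: the returned pointer stays valid until the pages are
     * freed, so it can be stashed in a long-lived structure. */
    if ( v )
    {
        memset(v, 0, PAGE_SIZE);
        free_xenheap_pages(v, 0);
    }

    /* domheap: the page cannot be assumed to be mapped, so every access
     * has to be bracketed by map_domain_page()/unmap_domain_page(). */
    if ( pg )
    {
        void *p = map_domain_page(page_to_mfn(pg));

        memset(p, 0, PAGE_SIZE);
        unmap_domain_page(p);       /* the mapping (and pointer) is transient */
        free_domheap_pages(pg, 0);
    }
}
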
> > The basic solution involves having a xenheap virtual address mapping
> > area that isn't tied to the physical layout of memory.  domheap and
> > xenheap memory would have to come from the same pool, but xenheap
> > allocations would need to be mapped into the xenheap virtual memory
> > region before being returned.
> 
> Wouldn't this most easily be done by making alloc_xenheap_pages()
> call alloc_domheap_pages() and then vmap() the result? Of course
> we may need to grow the vmap area in that case.

The existing vmap area is 64GB -- that should be big enough for Xen,
shouldn't it?

If it isn't, we would need to move that area to a different location,
because it can't be expanded on either side of where it currently sits
in the address space.
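
FWIW, the allocation side of what Jan suggests could look roughly like the
below -- only a sketch: the helper name is made up, NUMA node / memflags
handling and the free path are ignored, and vmap()'s exact signature may
differ between trees:

static void *alloc_xenheap_pages_via_vmap(unsigned int order,
                                          unsigned int memflags)
{
    struct page_info *pg = alloc_domheap_pages(NULL, order, memflags);
    mfn_t mfns[1u << order];    /* VLA just for brevity of the sketch */
    unsigned int i;
    void *va;

    if ( !pg )
        return NULL;

    for ( i = 0; i < (1u << order); i++ )
        mfns[i] = mfn_add(page_to_mfn(pg), i);

    /* Map the freshly allocated frames into the vmap region. */
    va = vmap(mfns, 1u << order);
    if ( !va )
        free_domheap_pages(pg, order);

    return va;
}

The free side would then be vunmap() plus looking the struct page_info back
up from the virtual address (vmap_to_mfn() or similar), and, as said, the
vmap area may need growing or moving.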

Wei.

> 
> Jan
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

