
Re: [Xen-devel] [PATCH v5 6/6] domain: use PGC_extra domheap page for shared_info



> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 09 March 2020 15:56
> To: paul@xxxxxxx
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Paul Durrant <pdurrant@xxxxxxxxxx>; 
> Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Julien Grall <julien@xxxxxxx>; Volodymyr Babchuk
> <Volodymyr_Babchuk@xxxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; 
> George Dunlap
> <george.dunlap@xxxxxxxxxx>; Ian Jackson <ian.jackson@xxxxxxxxxxxxx>; Konrad 
> Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH v5 6/6] domain: use PGC_extra domheap page for shared_info
> 
> On 09.03.2020 11:23, paul@xxxxxxx wrote:
> > From: Paul Durrant <pdurrant@xxxxxxxxxx>
> >
> > Currently shared_info is a shared xenheap page, but shared xenheap pages
> > complicate future plans for live-update of Xen, so it is desirable,
> > where possible, not to use them [1]. This patch therefore converts shared_info
> > into a PGC_extra domheap page. This does entail freeing shared_info during
> > domain_relinquish_resources() rather than domain_destroy(), so care is
> > needed to avoid de-referencing a NULL shared_info pointer; hence some
> > extra checks of 'is_dying' are needed.
> >
> > Also, because shared_info will no longer be a xenheap page this patch adds
> > an extra dump function to make sure the shared_info MFN is included in the
> > output of dump_pageframe_info().
> 
> I've voiced my objection to such a model, and hence it'll take
> another REST maintainer to approve of this despite my arguments
> against it. (And of course, just to re-record this here, the
> APIC access page, converted by ea3daabff5f2, will want to get a
> dumping function added then, too.)
> 
> I wonder whether a domain's "extra" pages couldn't be put in a
> separate, singly-linked list, using the union the next_shadow
> field is in as the linking field. None of the other union
> members look to be applicable to "extra" pages.
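
For illustration, that idea might look roughly like the sketch below. The
'extra_page_list' and 'next_extra' names are hypothetical stand-ins for the
list head and the re-used union member, not the actual struct page_info layout.

    /* Sketch only: a per-domain singly-linked list of PGC_extra pages,
     * linked through a union member that is otherwise unused for such
     * pages. "extra_page_list" and "next_extra" are hypothetical names. */
    static void extra_page_list_add(struct domain *d, struct page_info *pg)
    {
        ASSERT(pg->count_info & PGC_extra);
        pg->next_extra = d->extra_page_list;
        d->extra_page_list = pg;
    }

    static void dump_extra_pages(struct domain *d)
    {
        struct page_info *pg;

        /* Dump every "extra" page in dump_pageframe_info() style. */
        for ( pg = d->extra_page_list; pg; pg = pg->next_extra )
            printk("Extra Page %"PRI_mfn": caf=%08lx, taf=%" PRtype_info "\n",
                   mfn_x(page_to_mfn(pg)), pg->count_info,
                   pg->u.inuse.type_info);
    }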
> 
> > +void dump_shared_info(struct domain *d)
> > +{
> > +    domain_lock(d);
> > +
> > +    if ( d->shared_info.virt )
> > +        printk("Shared Info: %"PRI_mfn"\n", mfn_x(d->shared_info.mfn));
> 
> count_info and type_info should be logged imo, just like
> dump_pageframe_info() does. On the whole I think the actual
> dumping might better be uniform, and these functions would
> then only exist to "know" which page(s) to dump.
> 

I think the extra page list should cover these issues.
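
For reference, logging count_info and type_info as suggested could look roughly
like this (a sketch only, mirroring the dump_pageframe_info() format; the
domain_unlock() at the end is assumed from the original function, not quoted
above):

    void dump_shared_info(struct domain *d)
    {
        domain_lock(d);

        if ( d->shared_info.virt )
        {
            const struct page_info *pg = mfn_to_page(d->shared_info.mfn);

            printk("Shared Info: %"PRI_mfn" caf=%08lx, taf=%" PRtype_info "\n",
                   mfn_x(d->shared_info.mfn), pg->count_info,
                   pg->u.inuse.type_info);
        }

        domain_unlock(d);
    }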

> > --- a/xen/common/time.c
> > +++ b/xen/common/time.c
> > @@ -99,6 +99,18 @@ void update_domain_wallclock_time(struct domain *d)
> >      uint32_t *wc_version;
> >      uint64_t sec;
> >
> > +    if ( d != current->domain )
> > +    {
> > +        /*
> > +         * We need to check is_dying here as, if it is set, the
> > +         * shared_info may have been freed. To do this safely we need
> > +         * to hold the domain lock.
> > +         */
> > +        domain_lock(d);
> > +        if ( d->is_dying )
> > +            goto unlock;
> > +    }
> 
> This shouldn't happen very often, but it's a pretty heavy lock.
> It's a fundamental aspect of xenheap pages that their disposal
> can be delayed until almost the last moment of guest cleanup. I
> continue to think it's not really a good idea to have special
> purpose allocation (and mapping) accompanied by these pages
> getting taken care of by the generic relinquish-resources logic
> here (from a more general pov such is of course often nice to
> have). Instead of freeing these pages there, couldn't they just
> be taken off d->page_list, with the unmapping and freeing left
> as it was?

I don't think this can be achieved without being able to de-assign pages, and I
don't really want to have to invent new logic to do that (basically
re-implementing what happens to xenheap pages). I really don't think it is that
bad to deal with shared_info and grant table pages as domheap pages. Yes, we
have to be careful, but in this case the lock is only taken for a toolstack
update of the wallclock and, apart from start-of-day access, I think this is
limited to XEN_DOMCTL_settimeoffset and XEN_DOMCTL_setvcpucontext, neither of
which I believe is particularly performance-critical.
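
For reference, with the hunk above applied the function takes roughly the
following shape (a sketch only; the wc_lock section and the placement of the
'unlock' label are assumptions based on the hunk, not the actual patch):

    void update_domain_wallclock_time(struct domain *d)
    {
        if ( d != current->domain )
        {
            /*
             * Another domain's shared_info may already have been freed if
             * it is dying, so take the domain lock and re-check is_dying.
             */
            domain_lock(d);
            if ( d->is_dying )
                goto unlock;
        }

        spin_lock(&wc_lock);
        /* ... existing wallclock update of d's shared_info ... */
        spin_unlock(&wc_lock);

     unlock:
        if ( d != current->domain )
            domain_unlock(d);
    }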

  Paul

> 
> Jan

