
Re: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead of native.



On Mon, Jan 06, 2014 at 11:33:00AM +0000, Stefano Stabellini wrote:
> On Sun, 5 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > On Sun, Jan 05, 2014 at 06:11:39PM +0000, Stefano Stabellini wrote:
> > > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > > From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> > > > 
> > > > We also optimize one operation - the TLB flush. The native
> > > > operation would needlessly IPI offline VCPUs, causing extra
> > > > wakeups. Using the Xen one avoids that and lets the hypervisor
> > > > determine which VCPUs need the TLB flush.
> > > > 
> > > > Signed-off-by: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > > > ---
> > > >  arch/x86/xen/mmu.c | 9 +++++++++
> > > >  1 file changed, 9 insertions(+)
> > > > 
> > > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > > index 490ddb3..c1d406f 100644
> > > > --- a/arch/x86/xen/mmu.c
> > > > +++ b/arch/x86/xen/mmu.c
> > > > @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> > > >  void __init xen_init_mmu_ops(void)
> > > >  {
> > > >         x86_init.paging.pagetable_init = xen_pagetable_init;
> > > > +
> > > > +       /* Optimization - we can use the HVM one but it has no idea which
> > > > +        * VCPUs are descheduled - which means that it will needlessly IPI
> > > > +        * them. Xen knows so let it do the job.
> > > > +        */
> > > > +       if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > > > +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > > > +               return;
> > > > +       }
> > > >         pv_mmu_ops = xen_mmu_ops;
> > > >  
> > > >         memset(dummy_mapping, 0xff, PAGE_SIZE);
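For context, xen_flush_tlb_others boils down to handing Xen the target
CPU mask in an MMUEXT_TLB_FLUSH_MULTI hypercall, so the hypervisor can
skip vCPUs that are not running. A simplified sketch of that idea
follows; the real version in arch/x86/xen/mmu.c batches the operation
through the multicall machinery and uses MMUEXT_INVLPG_MULTI for
single-page flushes:

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>
#include <xen/interface/xen.h>
#include <asm/xen/hypercall.h>

/* Simplified sketch only - not the exact code from mmu.c. */
static void flush_tlb_others_sketch(const struct cpumask *cpus,
                                    struct mm_struct *mm,
                                    unsigned long start,
                                    unsigned long end)
{
        struct {
                struct mmuext_op op;
                DECLARE_BITMAP(mask, NR_CPUS);
        } args;

        /* Restrict to online CPUs and never include ourselves here;
         * the local TLB flush is handled separately. */
        cpumask_and(to_cpumask(args.mask), cpus, cpu_online_mask);
        cpumask_clear_cpu(smp_processor_id(), to_cpumask(args.mask));

        args.op.cmd = MMUEXT_TLB_FLUSH_MULTI;
        args.op.arg2.vcpumask = to_cpumask(args.mask);

        /* Xen IPIs only the vCPUs in the mask that are actually
         * running; descheduled ones are flushed when rescheduled. */
        HYPERVISOR_mmuext_op(&args.op, 1, NULL, DOMID_SELF);
}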
> > > 
> > > Regarding this patch, the next one, and the other changes to
> > > xen_setup_shared_info, xen_setup_mfn_list_list,
> > > xen_setup_vcpu_info_placement, etc.: considering that the MMU-related
> > > stuff is very different between PV and PVH guests, I wonder whether it
> > > makes any sense to keep calling xen_init_mmu_ops on PVH.
> > > 
> > > I would introduce a new function, xen_init_pvh_mmu_ops, that sets
> > > pv_mmu_ops.flush_tlb_others and only calls whatever is needed for PVH
> > > under a new xen_pvh_pagetable_init.
> > > Just to give you an idea, not even compile tested:
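Roughly, the split being described might look like this; the two
function names come from the paragraph above, while the bodies are
illustrative guesses rather than the actual (uncompiled) sketch:

/* Hypothetical split: PVH gets its own entry point instead of
 * sharing xen_init_mmu_ops() with old-style PV. */
static void __init xen_pvh_pagetable_init(void)
{
        /* Only whatever subset of xen_pagetable_init() PVH needs. */
}

void __init xen_init_pvh_mmu_ops(void)
{
        x86_init.paging.pagetable_init = xen_pvh_pagetable_init;
        /* The single op PVH overrides: let Xen pick the vCPUs. */
        pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
}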
> > 
> > There is something to be said for having the new-style share the
> > same code path that "old-style" PV is using - code coverage.
> > 
> > That is, the code gets tested on both platforms, and if I (or
> > anybody else) introduce a bug in the common PV paths it will
> > be immediately obvious, as the regression tests will hopefully
> > pick it up.
> > 
> > It is not nice - the low-level code ends up sprinkled with
> > one-offs for PVH - which mostly amount to doing _less_.
> 
> I thought you would say that. However in this specific case the costs

You know me too well :-)

> exceed the benefits. Think of all the times we'll have to debug
> something: we'll be staring at the code, and dozens of minutes
> later we'll realize that the code we have been looking at all
> along is not actually executed in PVH mode.
> 

For this specific code - that is, the shared grants and the hypercalls -
I think it needs a bit more testing to make sure suspend/resume works
well. Then this segregation can be done.

My reasoning is that there might be more code that could benefit
from this, so I could do it in one nice big patchset.

My other reason for delaying your suggestion is so that this
patchset can go into Linux without accumulating 20+ patches on
top, which would make the review more daunting.
> 
> > What I was thinking is to flip this around: make the PVH paths
> > the default and then wrap the big code in something like
> > 'if (!xen_pvh_domain())' (see the sketch below).
> > 
> > Would you be OK with this line of thinking going forward, say
> > after this patchset?
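A rough illustration of that inversion - shape only, not code from
the patchset:

void __init xen_init_mmu_ops(void)
{
        x86_init.paging.pagetable_init = xen_pagetable_init;

        /* PVH default: native MMU ops plus the Xen-assisted flush. */
        pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;

        if (!xen_pvh_domain()) {
                /* Only old-style PV pulls in the big code; xen_mmu_ops
                 * already carries xen_flush_tlb_others itself. */
                pv_mmu_ops = xen_mmu_ops;
                memset(dummy_mapping, 0xff, PAGE_SIZE);
        }
}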
>  
> I am not opposed to it in principle but I don't expect that you'll be
> able to improve things significantly.

The end goal is to take a chainsaw to the code and cut out the
old-PV-specific parts. But that is not going to happen now - rather
in 5 years, when we are comfortable with it.

And perhaps even wrap parts of it in #ifdef CONFIG_XEN_PVMMU to
identify the old code even further.
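For instance, the old-PV-only pieces could eventually sit behind that
option, along these lines (illustrative fragment inside
xen_init_mmu_ops, not part of this patchset):

#ifdef CONFIG_XEN_PVMMU
        /* Old-style PV: shadow the whole MMU op table. */
        pv_mmu_ops = xen_mmu_ops;
        memset(dummy_mapping, 0xff, PAGE_SIZE);
#endif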
