Re: [PATCH v15 1/3] mem_sharing: don't reset vCPU info page during fork reset



On Mon, Apr 20, 2020 at 1:45 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Fri, Apr 17, 2020 at 10:06:31AM -0700, Tamas K Lengyel wrote:
> > When a forked VM is being reset while having vm_events active, re-copying the
> > vCPU info page can lead to events being lost. This seems to only affect a
> > subset of events (interrupts), while not others (cpuid, MTF, EPT) thus it was
>
> I'm slightly lost by the sentence: is the guest or the hypervisor
> the one losing events?
>
> I.e.: interrupts are events from a guest PoV, but cpuid or EPT are not
> things that trigger events injected to the guest. I think
> the commit message needs clarification.

Sorry, what I meant was that software interrupts are no longer
triggered; i.e. int3 and its associated event are not sent to the
monitor application (VM_EVENT_REASON_SOFTWARE_BREAKPOINT).
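
For context, the symptom on the monitor side is simply that requests
with that reason stop arriving on the vm_event ring after a fork
reset. A minimal sketch of the consumer-side check (assuming a ring
already set up, e.g. via xc_monitor_enable() as in the tools'
xen-access example; the ring-processing loop itself is elided):

    #include <xenctrl.h>
    #include <xen/vm_event.h>

    /* Simplified handler for one request pulled off the vm_event ring. */
    static void handle_request(const vm_event_request_t *req)
    {
        switch ( req->reason )
        {
        case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
            /* int3 hit in the guest; before this fix, these stopped
             * arriving once a fork reset re-copied the vCPU info page. */
            break;
        default:
            break;
        }
    }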

>
> > not discovered beforehand. Only copying vCPU info page contents during initial
> > fork fixes the problem.
>
> Hm, I'm not sure I understand why this is causing issues. When you
> reset a fork you should reset the vcpu info page, or else event masks would
> be in a wrong state?

When we reset a fork we only want to 1) discard any memory allocated
for it and 2) reset the vCPU registers. We don't want to reset event
channels or anything else. We have active vm_events on the domain, and
the whole point of doing a fork reset is to avoid having to
reinitialize all that, as it's quite slow.
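
To illustrate, the intended reset flow is just those two steps; here
is a rough sketch with hypothetical helper names (illustration only,
not the actual mem_sharing.c interfaces):

    /* Hypothetical helpers, named for illustration only. */
    static void discard_fork_memory(struct domain *cd);
    static void reset_fork_vcpu_registers(struct domain *cd);

    static int fork_reset_sketch(struct domain *cd /* the fork */)
    {
        /* 1) Drop any memory the fork allocated/CoW'd since forking. */
        discard_fork_memory(cd);

        /* 2) Restore vCPU registers from the parent's saved context. */
        reset_fork_vcpu_registers(cd);

        /* Deliberately left untouched: event channels, vm_event rings
         * and the vCPU info page, so active monitoring keeps working. */
        return 0;
    }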

> >
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>
> > ---
> >  xen/arch/x86/mm/mem_sharing.c | 50 +++++++++++++++++------------------
> >  1 file changed, 25 insertions(+), 25 deletions(-)
> >
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index e572e9e39d..4b31a8b8f6 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1534,28 +1534,6 @@ int mem_sharing_fork_page(struct domain *d, gfn_t gfn, bool unsharing)
> >                            p2m->default_access, -1);
> >  }
> >
> > -static int bring_up_vcpus(struct domain *cd, struct domain *d)
> > -{
> > -    unsigned int i;
> > -    int ret = -EINVAL;
> > -
> > -    if ( d->max_vcpus != cd->max_vcpus ||
> > -        (ret = cpupool_move_domain(cd, d->cpupool)) )
> > -        return ret;
> > -
> > -    for ( i = 0; i < cd->max_vcpus; i++ )
> > -    {
> > -        if ( !d->vcpu[i] || cd->vcpu[i] )
> > -            continue;
> > -
> > -        if ( !vcpu_create(cd, i) )
> > -            return -EINVAL;
> > -    }
> > -
> > -    domain_update_node_affinity(cd);
> > -    return 0;
> > -}
> > -
> >  static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
> >  {
> >      unsigned int i;
> > @@ -1614,6 +1592,31 @@ static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
> >      return 0;
> >  }
> >
> > +static int bring_up_vcpus(struct domain *cd, struct domain *d)
> > +{
> > +    unsigned int i;
> > +    int ret = -EINVAL;
> > +
> > +    if ( d->max_vcpus != cd->max_vcpus ||
> > +        (ret = cpupool_move_domain(cd, d->cpupool)) )
> > +        return ret;
> > +
> > +    for ( i = 0; i < cd->max_vcpus; i++ )
> > +    {
> > +        if ( !d->vcpu[i] || cd->vcpu[i] )
> > +            continue;
> > +
> > +        if ( !vcpu_create(cd, i) )
> > +            return -EINVAL;
> > +    }
> > +
> > +    if ( (ret = copy_vcpu_settings(cd, d)) )
> > +        return ret;
> > +
> > +    domain_update_node_affinity(cd);
> > +    return 0;
> > +}
>
> I would prefer the code movement to be in a different patch: it makes
> it more difficult to spot non-functional changes being made while
> moving the function around.

I don't think it's worth splitting this patch because of that. The
patch is quite small and the move is trivial: it lets bring_up_vcpus
call copy_vcpu_settings without having to pre-declare that function.
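
For reference, keeping the original function order would instead need
a forward declaration above bring_up_vcpus along these lines:

    static int copy_vcpu_settings(struct domain *cd, const struct domain *d);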

Tamas
