
Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate


  • To: Roger Pau Monné <roger@xxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Thu, 28 May 2020 17:19:18 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Wei Liu <wl@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, nd <nd@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 28 May 2020 17:19:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate


> On 28 May 2020, at 17:53, Roger Pau Monné <roger@xxxxxxx> wrote:
> 
> On Thu, May 28, 2020 at 04:25:31PM +0100, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KPTI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> 
>> This patch modifies the runstate handling to map the area given by the
>> guest inside Xen during the hypercall.
>> This removes the guest virtual-to-physical conversion during context
>> switches, which fixes the bug and improves performance by avoiding
>> page-table walks during context switches.
>> 
>> --
>> In its current state, this patch only works on Arm and needs to be
>> fixed on x86 (see the #error in domain.c for the missing get_page_from_gva).
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>> ---
>> xen/arch/arm/domain.c   | 32 +++++++++-------
>> xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
>> xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
>> xen/include/xen/sched.h | 11 ++++--
>> 4 files changed, 124 insertions(+), 54 deletions(-)
>> 
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 31169326b2..799b0e0103 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
>> /* Update per-VCPU guest runstate shared memory area (if registered). */
>> static void update_runstate_area(struct vcpu *v)
>> {
>> -    void __user *guest_handle = NULL;
>> -    struct vcpu_runstate_info runstate;
>> +    struct vcpu_runstate_info *runstate;
>> 
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> +    /* XXX why do we accept not to block here */
>> +    if ( !spin_trylock(&v->runstate_guest_lock) )
> 
> IMO the runstate is not a crucial piece of information, so it's better
> to context switch fast.

Ok, I will add a comment there to explain that; otherwise it is not obvious
why we simply ignore the failure and continue.
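Something like this maybe (just a sketch):

    /*
     * The runstate area is only advisory: if the lock is contended
     * (e.g. the guest is re-registering the area on another vCPU),
     * skip this update rather than delaying the context switch; the
     * next context switch will publish fresh values.
     */
    if ( !spin_trylock(&v->runstate_guest_lock) )
        return;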

> 
>>         return;
>> 
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    runstate = runstate_guest(v);
>> +
>> +    if (runstate == NULL)
> 
> In general we don't explicitly check for NULL, and you could write
> this as:
> 
>    if ( runstate ) ...
> 
> Note the adding spaces between parentheses and the condition. I would
> also likely check runstate_guest(v) directly and assign to runstate
> afterwards if it's set.

Ok
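i.e. something like this (sketch):

    if ( !runstate_guest(v) )
    {
        spin_unlock(&v->runstate_guest_lock);
        return;
    }

    runstate = runstate_guest(v);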

> 
>> +    {
>> +        spin_unlock(&v->runstate_guest_lock);
>> +        return;
>> +    }
>> 
>>     if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
>> +        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>     }
>> 
>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    memcpy(runstate, &v->runstate, sizeof(v->runstate));
>> 
>> -    if ( guest_handle )
>> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
> 
> I think you need the barrier before clearing XEN_RUNSTATE_UPDATE from
> the guest version of the runstate info, to make sure writes are not
> re-ordered and hence that the XEN_RUNSTATE_UPDATE flag in the guest
> version is not cleared before the full write has been committed?

Very true. I will fix that.
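So the end of the function would become something like (sketch):

    memcpy(runstate, &v->runstate, sizeof(v->runstate));

    if ( VM_ASSIST(v->domain, runstate_update_flag) )
    {
        /* Make sure the guest sees the complete runstate before the
         * XEN_RUNSTATE_UPDATE flag is cleared in its copy. */
        smp_wmb();
        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
    }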

> 
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>     }
>> +
>> +    spin_unlock(&v->runstate_guest_lock);
>> }
>> 
>> static void schedule_tail(struct vcpu *prev)
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index 6327ba0790..10c6ceb561 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1642,57 +1642,62 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>> /* Update per-VCPU guest runstate shared memory area (if registered). */
>> bool update_runstate_area(struct vcpu *v)
>> {
>> -    bool rc;
>>     struct guest_memory_policy policy = { .nested_guest_mode = false };
>> -    void __user *guest_handle = NULL;
>> -    struct vcpu_runstate_info runstate;
>> +    struct vcpu_runstate_info *runstate;
>> 
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> +    /* XXX: should we return false ? */
>> +    if ( !spin_trylock(&v->runstate_guest_lock) )
>>         return true;
>> 
>> -    update_guest_memory_policy(v, &policy);
>> +    runstate = runstate_guest(v);
>> 
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    if (runstate == NULL)
>> +    {
>> +        spin_unlock(&v->runstate_guest_lock);
>> +        return true;
>> +    }
>> +
>> +    update_guest_memory_policy(v, &policy);
>> 
>>     if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        guest_handle = has_32bit_shinfo(v->domain)
>> -            ? &v->runstate_guest.compat.p->state_entry_time + 1
>> -            : &v->runstate_guest.native.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
>> +        if (has_32bit_shinfo(v->domain))
>> +            v->runstate_guest.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
>> +        else
>> +            v->runstate_guest.native->state_entry_time |= XEN_RUNSTATE_UPDATE;
> 
> I'm confused here, isn't runstate == v->runstate_guest.native at this
> point?
> 
> I think you want to update v->runstate.state_entry_time here?

I will have to dig deeper into the x86 implementation for that part because
the compatibility handling is not straightforward.
Currently, if compatibility mode is required, both our internal and external
copies of the runstate are in compatibility mode.

It might be simpler to handle the compatibility conversion only during
update_runstate_area instead of doing it everywhere?
But maybe this should be a change for another patch (if any).
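E.g. something like this (only a sketch, keeping Xen's internal copy in
native format and converting to the compat layout only here):

    if ( has_32bit_shinfo(v->domain) )
    {
        struct compat_vcpu_runstate_info info;

        XLAT_vcpu_runstate_info(&info, &v->runstate);
        memcpy(v->runstate_guest.compat, &info, sizeof(info));
    }
    else
        memcpy(runstate, &v->runstate, sizeof(v->runstate));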

> 
>>     }
>> 
>>     if ( has_32bit_shinfo(v->domain) )
>>     {
>>         struct compat_vcpu_runstate_info info;
>> -
>>         XLAT_vcpu_runstate_info(&info, &runstate);
>> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
>> -        rc = true;
>> +        memcpy(v->runstate_guest.compat, &info, sizeof(info));
>>     }
>>     else
>> -        rc = __copy_to_guest(runstate_guest(v), &runstate, 1) !=
>> -             sizeof(runstate);
>> +        memcpy(runstate, &v->runstate, sizeof(v->runstate));
>> 
>> -    if ( guest_handle )
>> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        if (has_32bit_shinfo(v->domain))
>> +            v->runstate_guest.compat->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +        else
>> +            v->runstate_guest.native->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> 
> Same comment here related to the usage of runstate_guest instead of
> runstate.

Agree

> 
>>     }
>> 
>> +    spin_unlock(&v->runstate_guest_lock);
>> +
>>     update_guest_memory_policy(v, &policy);
>> 
>> -    return rc;
>> +    return true;
>> }
>> 
>> static void _update_runstate_area(struct vcpu *v)
>> {
>> +    /* XXX: this should be removed */
>>     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
>>          !(v->arch.flags & TF_kernel_mode) )
>>         v->arch.pv.need_update_runstate_area = 1;
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 7cc9526139..acc6f90ba3 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -161,6 +161,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>>     v->dirty_cpu = VCPU_CPU_CLEAN;
>> 
>>     spin_lock_init(&v->virq_lock);
>> +    spin_lock_init(&v->runstate_guest_lock);
>> 
>>     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
>> 
>> @@ -691,6 +692,66 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
>>     return 0;
>> }
>> 
>> +static void unmap_runstate_area(struct vcpu *v, unsigned int lock)
> 
> lock wants to be a bool here.
Ok, I will fix that.

> 
>> +{
>> +    mfn_t mfn;
>> +
>> +    if ( ! runstate_guest(v) )
>> +        return;
> 
> I think you must check runstate_guest with the lock taken?

Right, I will fix that.
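Something along those lines maybe (sketch, with the bool parameter from your
earlier comment):

    static void unmap_runstate_area(struct vcpu *v, bool lock)
    {
        if ( lock )
            spin_lock(&v->runstate_guest_lock);

        if ( runstate_guest(v) )
        {
            mfn_t mfn = domain_page_map_to_mfn(runstate_guest(v));

            unmap_domain_page_global((void *)
                                     ((unsigned long)runstate_guest(v) &
                                      PAGE_MASK));
            put_page_and_type(mfn_to_page(mfn));
            runstate_guest(v) = NULL;
        }

        if ( lock )
            spin_unlock(&v->runstate_guest_lock);
    }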

> 
>> +
>> +    if (lock)
>> +        spin_lock(&v->runstate_guest_lock);
>> +
>> +    mfn = domain_page_map_to_mfn(runstate_guest(v));
>> +
>> +    unmap_domain_page_global((void *)
>> +                            ((unsigned long)v->runstate_guest &
>> +                             PAGE_MASK));
>> +
>> +    put_page_and_type(mfn_to_page(mfn));
>> +    runstate_guest(v) = NULL;
>> +
>> +    if (lock)
>> +        spin_unlock(&v->runstate_guest_lock);
>> +}
>> +
>> +static int map_runstate_area(struct vcpu *v,
>> +                    struct vcpu_register_runstate_memory_area *area)
>> +{
>> +    unsigned long offset = area->addr.p & ~PAGE_MASK;
>> +    void *mapping;
>> +    struct page_info *page;
>> +    size_t size = sizeof(struct vcpu_runstate_info);
>> +
>> +    ASSERT(runstate_guest(v) == NULL);
>> +
>> +    /* do not allow an area crossing 2 pages */
>> +    if ( offset > (PAGE_SIZE - size) )
>> +        return -EINVAL;
> 
> I'm afraid this is not suitable, Linux will BUG if
> VCPUOP_register_runstate_memory_area returns an error, and current
> Linux code doesn't check that the area doesn't cross a page
> boundary. You will need to take a reference to the possible two pages
> in that case.

Ok, I will fix that.
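A possible shape (untested sketch; I am assuming vmap() can be used here to
get a global mapping covering one or two pages, and that put_page() is the
right release for the reference taken by get_page_from_gva()):

    unsigned long offset = area->addr.p & ~PAGE_MASK;
    size_t size = sizeof(struct vcpu_runstate_info);
    unsigned int i, nr = (offset + size > PAGE_SIZE) ? 2 : 1;
    struct page_info *page[2];
    mfn_t mfn[2];
    void *mapping;

    for ( i = 0; i < nr; i++ )
    {
        /* Take a reference on each page the area touches. */
        page[i] = get_page_from_gva(v, (area->addr.p & PAGE_MASK) +
                                    i * PAGE_SIZE, GV2M_WRITE);
        if ( !page[i] )
        {
            while ( i-- )
                put_page(page[i]);
            return -EINVAL;
        }
        mfn[i] = page_to_mfn(page[i]);
    }

    /* Map the page(s) contiguously in Xen's address space. */
    mapping = vmap(mfn, nr);
    if ( !mapping )
    {
        for ( i = 0; i < nr; i++ )
            put_page(page[i]);
        return -ENOMEM;
    }

    runstate_guest(v) = mapping + offset;

    return 0;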

> 
>> +
>> +#ifdef CONFIG_ARM
>> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
>> +#else
>> +    /* XXX how to solve this one ? */
> 
> We have hvm_translate_get_page which seems similar, will need to look
> into this.

Ok, I will wait for more information from you on that one.

> 
>> +#error get_page_from_gva is not available on other archs
>> +#endif
>> +    if ( !page )
>> +        return -EINVAL;
>> +
>> +    mapping = __map_domain_page_global(page);
>> +
>> +    if ( mapping == NULL )
>> +    {
>> +        put_page_and_type(page);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    runstate_guest(v) = (struct vcpu_runstate_info *)
>> +        ((unsigned long)mapping + offset);
> 
> There's no need to cast to unsigned long, you can just do pointer
> arithmetic on the void * directly. That should also get rid of the
> cast to vcpu_runstate_info I think.

Some compilers do not allow arithmetic on void * and gcc forbids it with
-pedantic-errors; that is why I am used to writing code like that.
I will fix that.
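For reference, with the gcc extension Xen builds with, it would reduce to:

    runstate_guest(v) = mapping + offset;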

> 
>> +
>> +    return 0;
>> +}
>> +
>> int domain_kill(struct domain *d)
>> {
>>     int rc = 0;
>> @@ -727,7 +788,10 @@ int domain_kill(struct domain *d)
>>         if ( cpupool_move_domain(d, cpupool0) )
>>             return -ERESTART;
>>         for_each_vcpu ( d, v )
>> +        {
>> +            unmap_runstate_area(v, 0);
> 
> Why is it not appropriate here to hold the lock? It might not be
> technically needed, but still should work?

In a killing scenario you might stop a core while it is rescheduling.
Couldn't a core be killed using a cross-core interrupt?
If that is the case, would I need to mask interrupts around the locking
sections to prevent it?

Thanks for the feedback.
Bertrand


 

