
Re: [Xen-devel] [PATCH 2/2] xen: merge temporary vcpu pinning scenarios


  • To: Juergen Gross <jgross@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 23 Jul 2019 13:26:31 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 23 Jul 2019 12:26:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23/07/2019 10:20, Juergen Gross wrote:
> Today there are three scenarios which are pinning vcpus temporarily to
> a single physical cpu:
>
> - NMI/MCE injection into PV domains
> - wait_event() handling
> - vcpu_pin_override() handling
>
> Each of those cases is handled independently today, using its own
> temporary cpumask to save the old affinity settings.
>
> The three cases can be combined, as the latter two cases will only pin
> a vcpu to the physical cpu it is already running on, while
> vcpu_pin_override() is allowed to fail.
>
> So merge the three temporary pinning scenarios by using only one
> cpumask and a per-vcpu bitmask for specifying which of the three
> scenarios is currently active (they are allowed to nest).
>
> Note that we don't need to call domain_update_node_affinity() as we
> are only pinning for a brief period of time.
>
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
>  xen/arch/x86/pv/traps.c | 20 +-------------------
>  xen/arch/x86/traps.c    |  8 ++------
>  xen/common/domain.c     |  4 +---
>  xen/common/schedule.c   | 35 +++++++++++++++++++++++------------
>  xen/common/wait.c       | 26 ++++++++------------------
>  xen/include/xen/sched.h |  8 +++++---
>  6 files changed, 40 insertions(+), 61 deletions(-)
>
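Just to spell out the nesting the commit message describes (an
illustrative sketch only, assuming the vcpu is already running on pcpu
and using the names introduced by this patch):

    vcpu_set_tmp_affinity(v, pcpu, VCPU_AFFINITY_WAIT); /* saves old affinity, pins */
    vcpu_set_tmp_affinity(v, pcpu, VCPU_AFFINITY_NMI);  /* nests: only sets the bit */
    vcpu_set_tmp_affinity(v, -1, VCPU_AFFINITY_NMI);    /* clears the bit, stays pinned */
    vcpu_set_tmp_affinity(v, -1, VCPU_AFFINITY_WAIT);   /* last reason gone: restore */
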
> diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
> index 1740784ff2..37dac300ba 100644
> --- a/xen/arch/x86/pv/traps.c
> +++ b/xen/arch/x86/pv/traps.c
> @@ -151,25 +151,7 @@ static void nmi_mce_softirq(void)
>  
>      BUG_ON(st->vcpu == NULL);
>  
> -    /*
> -     * Set the tmp value unconditionally, so that the check in the iret
> -     * hypercall works.
> -     */
> -    cpumask_copy(st->vcpu->cpu_hard_affinity_tmp,
> -                 st->vcpu->cpu_hard_affinity);
> -
> -    if ( (cpu != st->processor) ||
> -         (st->processor != st->vcpu->processor) )
> -    {
> -
> -        /*
> -         * We are on a different physical cpu.  Make sure to wakeup the vcpu on
> -         * the specified processor.
> -         */
> -        vcpu_set_hard_affinity(st->vcpu, cpumask_of(st->processor));
> -
> -        /* Affinity is restored in the iret hypercall. */
> -    }
> +    vcpu_set_tmp_affinity(st->vcpu, st->processor, VCPU_AFFINITY_NMI);

Please can we keep the comment explaining where the affinity is
restored, which is a disguised explanation of why it is PV-only.
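
E.g. something along these lines would keep the hint (wording is only a
sketch; vcpu_set_tmp_affinity() and VCPU_AFFINITY_NMI are the names from
the patch):

    /*
     * Pin the vcpu so it is woken on the specified processor.  The
     * affinity is restored in the iret hypercall, which is why this
     * temporary pinning is PV-only.
     */
    vcpu_set_tmp_affinity(st->vcpu, st->processor, VCPU_AFFINITY_NMI);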

> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 89bc259ae4..d4de74f9c8 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1106,47 +1106,58 @@ void watchdog_domain_destroy(struct domain *d)
>          kill_timer(&d->watchdog_timer[i]);
>  }
>  
> -int vcpu_pin_override(struct vcpu *v, int cpu)
> +int vcpu_set_tmp_affinity(struct vcpu *v, int cpu, uint8_t reason)
>  {
>      spinlock_t *lock;
>      int ret = -EINVAL;
> +    bool migrate;
>  
>      lock = vcpu_schedule_lock_irq(v);
>  
>      if ( cpu < 0 )
>      {
> -        if ( v->affinity_broken )
> +        if ( v->affinity_broken & reason )
>          {
> -            sched_set_affinity(v, v->cpu_hard_affinity_saved, NULL);
> -            v->affinity_broken = 0;
>              ret = 0;
> +            v->affinity_broken &= ~reason;
>          }
> +        if ( !ret && !v->affinity_broken )
> +            sched_set_affinity(v, v->cpu_hard_affinity_saved, NULL);
>      }
>      else if ( cpu < nr_cpu_ids )
>      {
> -        if ( v->affinity_broken )
> +        if ( (v->affinity_broken & reason) ||
> +             (v->affinity_broken && v->processor != cpu) )
>              ret = -EBUSY;
>          else if ( cpumask_test_cpu(cpu, VCPU2ONLINE(v)) )
>          {
> -            cpumask_copy(v->cpu_hard_affinity_saved, v->cpu_hard_affinity);
> -            v->affinity_broken = 1;
> -            sched_set_affinity(v, cpumask_of(cpu), NULL);
> +            if ( !v->affinity_broken )
> +            {
> +                cpumask_copy(v->cpu_hard_affinity_saved, v->cpu_hard_affinity);
> +                sched_set_affinity(v, cpumask_of(cpu), NULL);
> +            }
> +            v->affinity_broken |= reason;
>              ret = 0;
>          }
>      }
>  
> -    if ( ret == 0 )
> +    migrate = !ret && !cpumask_test_cpu(v->processor, v->cpu_hard_affinity);
> +    if ( migrate )
>          vcpu_migrate_start(v);
>  
>      vcpu_schedule_unlock_irq(lock, v);
>  
> -    domain_update_node_affinity(v->domain);
> -
> -    vcpu_migrate_finish(v);
> +    if ( migrate )
> +        vcpu_migrate_finish(v);
>  
>      return ret;
>  }
>  
> +int vcpu_pin_override(struct vcpu *v, int cpu)

There are exactly two callers of vcpu_pin_override().  I'd take the
opportunity to make vcpu_set_tmp_affinity() the single API call for
adjusting affinity.
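
That would just be a mechanical change at the two call sites, along the
lines of (sketch only, with whatever vcpu/cpu variables the callers
already have in hand):

    ret = vcpu_set_tmp_affinity(v, cpu, VCPU_AFFINITY_OVERRIDE);

after which the vcpu_pin_override() wrapper can be dropped.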

> +{
> +    return vcpu_set_tmp_affinity(v, cpu, VCPU_AFFINITY_OVERRIDE);
> +}
> +
>  typedef long ret_t;
>  
>  #endif /* !COMPAT */
> diff --git a/xen/common/wait.c b/xen/common/wait.c
> index 4f830a14e8..9f9ad033b3 100644
> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -182,30 +178,24 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
>  static void __finish_wait(struct waitqueue_vcpu *wqv)
>  {
>      wqv->esp = NULL;
> -    (void)vcpu_set_hard_affinity(current, &wqv->saved_affinity);
> +    vcpu_set_tmp_affinity(current, -1, VCPU_AFFINITY_WAIT);
>  }
>  
>  void check_wakeup_from_wait(void)
>  {
> -    struct waitqueue_vcpu *wqv = current->waitqueue_vcpu;
> +    struct vcpu *curr = current;
> +    struct waitqueue_vcpu *wqv = curr->waitqueue_vcpu;
>  
>      ASSERT(list_empty(&wqv->list));
>  
>      if ( likely(wqv->esp == NULL) )
>          return;
>  
> -    /* Check if we woke up on the wrong CPU. */
> -    if ( unlikely(smp_processor_id() != wqv->wakeup_cpu) )
> +    /* Check if we are still pinned. */
> +    if ( unlikely(!(curr->affinity_broken & VCPU_AFFINITY_WAIT)) )
>      {
> -        /* Re-set VCPU affinity and re-enter the scheduler. */
> -        struct vcpu *curr = current;
> -        cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
> -        if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
> -        {
> -            gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
> -            domain_crash(current->domain);
> -        }
> -        wait(); /* takes us back into the scheduler */
> +        gdprintk(XENLOG_ERR, "vcpu affinity lost\n");
> +        domain_crash(current->domain);

curr
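
i.e., just spelling the nit out:

        domain_crash(curr->domain);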

>      }
>  
>      /*
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index b40c8fd138..721c429454 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -200,7 +200,10 @@ struct vcpu
>      /* VCPU is paused following shutdown request (d->is_shutting_down)? */
>      bool             paused_for_shutdown;
>      /* VCPU need affinity restored */
> -    bool             affinity_broken;
> +    uint8_t          affinity_broken;
> +#define VCPU_AFFINITY_OVERRIDE    0x01
> +#define VCPU_AFFINITY_NMI         0x02

VCPU_AFFINITY_NMI_MCE?  It is used for more than just NMIs.

~Andrew

> +#define VCPU_AFFINITY_WAIT        0x04
>  
>      /* A hypercall has been preempted. */
>      bool             hcall_preempted;
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel