
Re: [Xen-devel] [PATCH] xen: cpupool: forbid to split cores among different pools


  • To: Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Tue, 21 Aug 2018 10:25:04 +0200
  • Delivery-date: Tue, 21 Aug 2018 08:25:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 20/08/18 18:43, Dario Faggioli wrote:
> On a system with hyperthreading, we currently allow putting cpus that
> are SMT siblings in different cpupools. This is bad for a number of
> reasons.
> 
> For instance, the schedulers can't know whether or not a core is fully
> idle, if the threads of such a core are in different pools. Right now,
> this is a load-balancing/resource-efficiency problem. Furthermore, if
> at some point we want to implement core-scheduling, that too becomes
> impossible if hyperthreads are split among pools.
> 
> Therefore, let's start allowing into a cpupool only cpus whose SMT
> siblings are either:
> - in that same pool, or
> - outside of any pool.
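
So, if I'm reading this right, on e.g. a 2-core/4-thread box, with cpus
0/1 and 2/3 being the sibling pairs and a second pool "testpool" already
created, the behaviour would be something like:

  xl cpupool-cpu-remove Pool-0 2   # cpu 2 becomes free
  xl cpupool-cpu-add testpool 2    # refused (EBUSY): sibling cpu 3 still in Pool-0
  xl cpupool-cpu-remove Pool-0 3   # free the sibling as well
  xl cpupool-cpu-add testpool 2    # ok: sibling is outside of any pool
  xl cpupool-cpu-add testpool 3    # ok: sibling is already in testpool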

Can we make this optional somehow? I don't mind this behavior being the
default, but it should be possible to switch it off.

Otherwise it will be impossible e.g. to test moving cpus between two
cpupools on a machine with only 2 cores.
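
A hypervisor boot parameter would probably be enough for that. A rough
sketch of what I mean, on top of your patch (the parameter name
"cpupool-split-cores" is made up here; boolean_param() comes from
xen/init.h):

  /* Allow SMT siblings of one core to be split across cpupools. */
  static bool __read_mostly opt_cpupool_split_cores;
  boolean_param("cpupool-split-cores", opt_cpupool_split_cores);

and then, in cpupool_assign_cpu_locked(), only do the new sibling check
when the parameter is off:

  if ( !opt_cpupool_split_cores )
  {
      for_each_cpu(s, per_cpu(cpu_sibling_mask, cpu))
      {
          if ( !cpumask_test_cpu(s, c->cpu_valid) &&
               !cpumask_test_cpu(s, &cpupool_free_cpus) )
              return -EBUSY;
      }
  }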


Juergen

> 
> Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
> ---
> Cc: Juergen Gross <jgross@xxxxxxxx>
> ---
>  xen/common/cpupool.c |   34 +++++++++++++++++++++++++++++-----
>  1 file changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index 1e8edcbd57..1e52fea5ac 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -264,10 +264,24 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>  static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
>  {
>      int ret;
> +    unsigned int s;
>      struct domain *d;
>  
>      if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
>          return -EADDRNOTAVAIL;
> +
> +    /*
> +     * If we have SMT, we only allow a new cpu in, if its siblings are either
> +     * in this same cpupool too, or outside of any pool.
> +     */
> +
> +    for_each_cpu(s, per_cpu(cpu_sibling_mask, cpu))
> +    {
> +        if ( !cpumask_test_cpu(s, c->cpu_valid) &&
> +             !cpumask_test_cpu(s, &cpupool_free_cpus) )
> +            return -EBUSY;
> +    }
> +
>      ret = schedule_cpu_switch(cpu, c);
>      if ( ret )
>          return ret;
> @@ -646,18 +660,28 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
>          cpupool_dprintk("cpupool_assign_cpu(pool=%d,cpu=%d)\n",
>                          op->cpupool_id, cpu);
>          spin_lock(&cpupool_lock);
> +        c = cpupool_find_by_id(op->cpupool_id);
> +        ret = -ENOENT;
> +        if ( c == NULL )
> +            goto addcpu_out;
> +        /* Pick a cpu from free cores, or from cores with cpus already in c */
>          if ( cpu == XEN_SYSCTL_CPUPOOL_PAR_ANY )
> -            cpu = cpumask_first(&cpupool_free_cpus);
> +        {
> +            for_each_cpu(cpu, &cpupool_free_cpus)
> +            {
> +                const cpumask_t *siblings = per_cpu(cpu_sibling_mask, cpu);
> +
> +                if ( cpumask_intersects(siblings, c->cpu_valid) ||
> +                     cpumask_subset(siblings, &cpupool_free_cpus) )
> +                    break;
> +            }
> +        }
>          ret = -EINVAL;
>          if ( cpu >= nr_cpu_ids )
>              goto addcpu_out;
>          ret = -ENODEV;
>          if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
>              goto addcpu_out;
> -        c = cpupool_find_by_id(op->cpupool_id);
> -        ret = -ENOENT;
> -        if ( c == NULL )
> -            goto addcpu_out;
>          ret = cpupool_assign_cpu_locked(c, cpu);
>      addcpu_out:
>          spin_unlock(&cpupool_lock);
> 
> 

