
Re: [PATCH v2 1/2] credit: Limit load balancing to once per millisecond


  • To: George Dunlap <george.dunlap@xxxxxxxxx>
  • From: Henry Wang <Henry.Wang@xxxxxxx>
  • Date: Fri, 22 Sep 2023 01:30:07 +0000
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Fri, 22 Sep 2023 01:30:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 1/2] credit: Limit load balancing to once per millisecond

Hi George,

> On Sep 21, 2023, at 20:23, George Dunlap <george.dunlap@xxxxxxxxx> wrote:
> 
> The credit scheduler tries as hard as it can to ensure that it always
> runs scheduling units with positive credit (PRI_TS_UNDER) before
> running those with negative credit (PRI_TS_OVER).  If the next
> runnable scheduling unit is of priority OVER, it will always run the
> load balancer, which will scour the system looking for another
> scheduling unit of the UNDER priority.
> 
> Unfortunately, as the number of cores on a system has grown, the cost
> of the work-stealing algorithm has dramatically increased; a recent
> trace on a system with 128 cores showed this taking over 50
> microseconds.
> 
> Add a parameter, load_balance_ratelimit, to limit the frequency of
> load balance operations on a given pcpu.  Default this to 1
> millisecond.
> 
> Invert the load balancing conditional to make it clearer and to line it
> up more closely with the comment above it.
> 
> Overall it might be cleaner to have the last_load_balance checking
> happen inside csched_load_balance(), but that would require either
> passing both now and spc into the function, or looking them up again;
> both of which seemed to be worse than simply checking and setting the
> values before calling it.
> 
> On a system with a vcpu:pcpu ratio of 2:1, running Windows guests
> (which will end up calling YIELD during spinlock contention), this
> patch increased performance significantly.
> 
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxx>
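
For illustration, the per-pcpu rate limit described above might look roughly
like the standalone sketch below. It is not the patch itself: the names
load_balance_ratelimit, last_load_balance and csched_load_balance() come from
the commit message, while the pcpu_state type, the may_load_balance() helper
and the nanosecond arithmetic are assumptions made for the example.

/*
 * Minimal standalone sketch of the per-pcpu rate limit described in the
 * commit message above -- not the actual patch.
 */
#include <stdbool.h>
#include <stdint.h>

#define MICROSECS(us) ((int64_t)(us) * 1000)        /* microseconds -> ns */

/* Tunable corresponding to the proposed load_balance_ratelimit parameter. */
static int64_t load_balance_ratelimit = MICROSECS(1000);   /* 1 ms default */

/* Per-pcpu scheduler state; only the field relevant to this sketch. */
struct pcpu_state {
    int64_t last_load_balance;   /* time (ns) of the last balancing pass */
};

/*
 * Decide whether this pcpu may run the work-stealing pass now.  If so,
 * record the timestamp, i.e. "checking and setting the values before
 * calling" the load balancer, as described in the commit message.
 */
static bool may_load_balance(struct pcpu_state *spc, int64_t now)
{
    if ( now - spc->last_load_balance < load_balance_ratelimit )
        return false;
    spc->last_load_balance = now;
    return true;
}

In the scheduling hot path, the existing call to csched_load_balance() would
then only be made when the next runnable unit is of OVER priority and
may_load_balance() returns true, so each pcpu performs at most one expensive
work-stealing scan per ratelimit interval.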

Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

Kind regards,
Henry



 

