
Re: xentrace buffer size, maxcpus and online cpus


  • To: Olaf Hering <olaf@xxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 16 Jun 2023 17:08:25 +0100
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Juergen Gross <jgross@xxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>
  • Delivery-date: Fri, 16 Jun 2023 16:08:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16/06/2023 4:37 pm, Olaf Hering wrote:
> Fri, 16 Jun 2023 15:22:24 +0100 George Dunlap <george.dunlap@xxxxxxxxx>:
>
>> I agree; the clear implication is that with smt=0, you might have
>> num_online_cpus() return 4, but cpuids that look like {1, 3, 5, 7}; so you
>> need the trace buffer array to be of size 8.
> I see. Apparently some remapping is required to map a cpuid to an index
> into the trace buffer metadata.

The xentrace mapping interface is horrible, and makes a lot of
assumptions which date from the early PV-only days.

If you want to improve things, we've now got all the building blocks
for a much saner interface.

XENMEM_acquire_resource is a newer mapping interface with far saner
semantics which, amongst other things, also works in PVH guests.

If we specify a new mapping space of type xentrace, using CPU IDs as
the sub-index space (see vmtrace as an example), then you'll remove that
entire open-coded mechanism of passing MFNs around, as well as reduce
the number of unstable hypercalls that the xentrace infrastructure uses.
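To illustrate, a consumer of such an interface might look roughly like the
sketch below.  It is only a sketch under stated assumptions:
XENMEM_resource_xentrace does not exist today (XENMEM_resource_vmtrace_buf
is the existing precedent), and the choice of domid for Xen-owned trace
buffers is likewise an assumption.  The mapping call itself,
xenforeignmemory_map_resource(), is the real libxenforeignmemory entry
point for XENMEM_acquire_resource:

```c
/* Hypothetical sketch: mapping one CPU's xentrace buffer via
 * XENMEM_acquire_resource, modelled on the existing vmtrace resource.
 * XENMEM_resource_xentrace, its sub-index layout, and the use of
 * DOMID_XEN are assumptions, not a real interface. */
#include <stdio.h>
#include <sys/mman.h>

#include <xenforeignmemory.h>

#define XENMEM_resource_xentrace 0  /* hypothetical resource type */

static void *map_trace_buffer(xenforeignmemory_handle *fmem,
                              unsigned int cpu, unsigned long nr_frames)
{
    void *addr = NULL;

    /* The sub-index ("id") is the CPU ID, so no MFN list needs to be
     * handed to the toolstack: Xen resolves cpu -> buffer internally. */
    xenforeignmemory_resource_handle *res =
        xenforeignmemory_map_resource(fmem, DOMID_XEN,
                                      XENMEM_resource_xentrace,
                                      cpu /* id: which CPU's buffer */,
                                      0 /* frame offset */, nr_frames,
                                      &addr, PROT_READ, 0);
    if ( !res )
    {
        perror("xenforeignmemory_map_resource");
        return NULL;
    }

    return addr;
}
```

With something of this shape, xentrace would no longer need to fetch MFNs
over one unstable hypercall and map them over another; the acquire-resource
path is a stable ABI and handles the buffer lifetime for us.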

I can talk you through it further if you feel up to tackling this.

~Andrew
