
Re: [PATCH 1/4] x86/hvmloader: SMP improvements


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 24 Mar 2023 11:46:46 +0000
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 24 Mar 2023 11:47:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 24/08/2022 3:21 pm, Jan Beulich wrote:
> On 24.08.2022 12:59, Andrew Cooper wrote:
>
>> -    ap_callin = 1;
>> +
>> +    /*
>> +     * Call in to the BSP.  For APs, take ourselves offline.
>> +     *
>> +     * We must not use the stack after calling in to the BSP.
>> +     */
>> +    asm volatile (
>> +        "    movb $1, ap_callin          \n"
>> +
>> +        "    test %[icr2], %[icr2]       \n"
>> +        "    jz   .Lbsp                  \n"
>
> Are we intending to guarantee going forward that the BSP always has
> APIC ID zero?

It's currently true, and I doubt that will change, but I prefer the
suggestion not to call this at all on the BSP.
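
Concretely (a sketch only, not a proposed hunk): if only APs ever reach
this function, the APIC ID test above and the .Lbsp label quoted further
down both disappear, leaving:

    asm volatile (
        /* Call in to the BSP, then send INIT to ourselves and park. */
        "    movb $1, ap_callin          \n"
        "    movl %[icr2], %[ICR2]       \n"
        "    movl %[init], %[ICR1]       \n"
        "1:  hlt                         \n"
        "    jmp  1b                     \n"
        :
        : [icr2] "r" (SET_APIC_DEST_FIELD(LAPIC_ID(cpu))),
          [init] "i" (APIC_DM_INIT),
          [ICR1] "m" (*(uint32_t *)(LAPIC_BASE_ADDRESS + APIC_ICR)),
          [ICR2] "m" (*(uint32_t *)(LAPIC_BASE_ADDRESS + APIC_ICR2))
        : "memory" );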

>
>> +        "    movl %[icr2], %[ICR2]       \n"
>> +        "    movl %[init], %[ICR1]       \n"
>> +        "1:  hlt                         \n"
>> +        "    jmp  1b                     \n"
>
> The use of the function for the BSP is questionable anyway. What is
> really needed is the call to cacheattr_init(). I'm inclined to
> suggest moving to something like
>
> void smp_initialise(void)
> {
>    unsigned int i, nr_cpus = hvm_info->nr_vcpus;
>
>    cacheattr_init();
>
>    if ( nr_cpus <= 1 )
>        return;
>
>    memcpy((void *)AP_BOOT_EIP, ap_boot_start, ap_boot_end - ap_boot_start);
>
>    printf("Multiprocessor initialisation:\n");
>    for ( i = 1; i < nr_cpus; i++ )
>        boot_cpu(i);
> }
>
> thus eliminating bogus output when there's just one vCPU.
> Then the function here can become noreturn (which I was about to suggest
> until spotting that for the BSP the function actually does return).

Dropping the printf() isn't nice, because you'll then get unqualified
output from cacheattr_init().

I'll see if I can rearrange this a bit more nicely.
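
For example, something along these lines (a sketch under the assumption
that the banner is what qualifies cacheattr_init()'s output; not the
final arrangement):

    void smp_initialise(void)
    {
        unsigned int i, nr_cpus = hvm_info->nr_vcpus;

        /* Banner first, so cacheattr_init()'s output is attributed to
         * MP initialisation rather than appearing bare. */
        printf("Multiprocessor initialisation:\n");
        cacheattr_init();

        if ( nr_cpus <= 1 )
            return;

        /* Copy the AP trampoline into low memory before starting APs. */
        memcpy((void *)AP_BOOT_EIP, ap_boot_start,
               ap_boot_end - ap_boot_start);

        for ( i = 1; i < nr_cpus; i++ )
            boot_cpu(i);
    }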

>
>> +        ".Lbsp:                          \n"
>> +        :
>> +        : [icr2] "r" (SET_APIC_DEST_FIELD(LAPIC_ID(cpu))),
>> +          [init] "i" (APIC_DM_INIT),
>> +          [ICR1] "m" (*(uint32_t *)(LAPIC_BASE_ADDRESS + APIC_ICR)),
>> +          [ICR2] "m" (*(uint32_t *)(LAPIC_BASE_ADDRESS + APIC_ICR2))
>> +        : "memory" );
>
> Can't you use APIC_DEST_SELF now, avoiding the need to fiddle
> with ICR2?

No.  Fixed is the only delivery mode which can use the self or
all-including-self shorthands.  All others are only permitted to use
all-excluding-self.

This makes sense as a consequence of likely shortcuts taken when
integrating the LAPIC into the core.  Either way, it's documented
behaviour now, however inconvenient that is for this case.
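
To spell out the constraint (a compilable sketch, not hvmloader code;
lapic_write() and the constant values here are illustrative, following
the usual apicdef naming):

    #include <stdint.h>

    #define LAPIC_BASE         0xfee00000u
    #define APIC_ICR           0x300     /* ICR low: delivery mode etc.   */
    #define APIC_ICR2          0x310     /* ICR high: destination field   */
    #define APIC_DM_INIT       0x00500u  /* INIT delivery mode            */
    #define APIC_DEST_ALLBUT   0xC0000u  /* "all excluding self" shorthand */
    #define SET_APIC_DEST_FIELD(x) ((uint32_t)(x) << 24)

    static void lapic_write(unsigned int reg, uint32_t val)
    {
        *(volatile uint32_t *)(LAPIC_BASE + reg) = val;
    }

    /* Legal: INIT with an explicit destination, as the patch does. */
    static void send_init(unsigned int apic_id)
    {
        lapic_write(APIC_ICR2, SET_APIC_DEST_FIELD(apic_id));
        lapic_write(APIC_ICR, APIC_DM_INIT);
    }

    /*
     * Also legal: INIT with the "all excluding self" shorthand, but that
     * takes down every other CPU, the opposite of what an AP parking
     * itself wants.  INIT with APIC_DEST_SELF (shorthand 01) is not
     * architecturally valid; only Fixed interrupts may use the self and
     * all-including-self shorthands.
     */
    static void send_init_allbut(void)
    {
        lapic_write(APIC_ICR, APIC_DEST_ALLBUT | APIC_DM_INIT);
    }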

~Andrew
