
Re: [Xen-devel] [PATCH v8 12/16] microcode: split out apply_microcode() from cpu_request_microcode()


  • To: Chao Gao <chao.gao@xxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Mon, 5 Aug 2019 09:38:00 +0000
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ashok Raj <ashok.raj@xxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 05 Aug 2019 09:39:35 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05.08.2019 09:36, Chao Gao wrote:
> On Fri, Aug 02, 2019 at 03:40:55PM +0000, Jan Beulich wrote:
>> On 01.08.2019 12:22, Chao Gao wrote:
>>> --- a/xen/arch/x86/microcode.c
>>> +++ b/xen/arch/x86/microcode.c
>>> @@ -189,12 +189,20 @@ static DEFINE_SPINLOCK(microcode_mutex);
>>>    
>>>    DEFINE_PER_CPU(struct cpu_signature, cpu_sig);
>>>    
>>> -struct microcode_info {
>>> -    unsigned int cpu;
>>> -    uint32_t buffer_size;
>>> -    int error;
>>> -    char buffer[1];
>>> -};
>>> +/*
>>> + * Return a patch that covers the current CPU. If there are multiple patches,
>>> + * return the one with the highest revision number. Return an error pointer
>>> + * if an error occurs during the parsing process. Otherwise (i.e. if no
>>> + * applicable patch is found) return NULL.
>>> + */
>>> +static struct microcode_patch *microcode_parse_blob(const char *buf,
>>> +                                                    uint32_t len)
>>
>> Btw - you'd have less issues with line length if you omitted the
>> "microcode_" prefix from static functions.
>>
>>> @@ -250,49 +251,88 @@ bool microcode_update_cache(struct microcode_patch *patch)
>>>        return true;
>>>    }
>>>    
>>> -static int microcode_update_cpu(const void *buf, size_t size)
>>> +/*
>>> + * Load a microcode update to current CPU.
>>> + *
>>> + * If no patch is provided, the cached patch will be loaded. Microcode update
>>> + * during AP bringup and CPU resuming falls into this case.
>>> + */
>>> +static int microcode_update_cpu(const struct microcode_patch *patch)
>>>    {
>>> -    int err;
>>> -    unsigned int cpu = smp_processor_id();
>>> -    struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
>>> +    int err = microcode_ops->collect_cpu_info(&this_cpu(cpu_sig));
>>> +
>>> +    if ( unlikely(err) )
>>> +        return err;
>>>    
>>>        spin_lock(&microcode_mutex);
>>>    
>>> -    err = microcode_ops->collect_cpu_info(sig);
>>> -    if ( likely(!err) )
>>> -        err = microcode_ops->cpu_request_microcode(buf, size);
>>> +    if ( patch )
>>> +    {
>>> +        /*
>>> +         * If a patch is specified, it should have a newer revision than
>>> +         * that of the cached patch.
>>> +         */
>>> +        if ( microcode_cache &&
>>> +             microcode_ops->compare_patch(patch, microcode_cache) != NEW_UCODE )
>>
>> While I see that you've taken care of the one case in Intel specific
>> code, this is again a case where I don't think you can validly call
>> this hook in the Intel case. Albeit maybe it is okay, provided that
>> the caller has already verified it against the CPU signature. Then
>> again I wonder why this check gets done here rather than in the
>> caller (next to that other check) anyway. Afaics this way you'd
>> also avoid re-checking on every CPU a CPU-independent property.
> 
> As said in an earlier reply to patch 6, ->compare_patch can
> be simplified a lot. Do you think it is fine to call it here
> with that change?

As said there - yes, this looks to be an option.

> About avoiding re-checking: the check needs to be done with "microcode_mutex"
> held, as otherwise we cannot ensure nobody is updating the cache. If we
> wanted to avoid re-checking, the lock would have to be held for a long time,
> from loading on the first core until loading on the last core. Also, for
> both early loading and late loading, we pass NULL to this function on the
> CPUs following the first successful loading. So I am afraid there is no
> redundant checking here.

There should not be any updating of the cache while one (system-wide)
ucode load operation is in progress, or else you risk leaving the
system in a "mixed ucode versions" state in the end. As said, also
from a logical flow-of-events perspective it seems to me that it should
be the caller to validate applicability of the patch before calling
here. But as also said, I may as well be missing some important aspect
here.

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
