
Re: [PATCH][4.17] EFI: don't convert memory marked for runtime use to ordinary RAM


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 6 Oct 2022 10:39:06 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Henry Wang <Henry.Wang@xxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 06 Oct 2022 08:39:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05.10.2022 20:09, Julien Grall wrote:
> Hi Jan,
> 
> On 05/10/2022 12:55, Jan Beulich wrote:
>> On 05.10.2022 12:44, Julien Grall wrote:
>>> On 04/10/2022 16:58, Jan Beulich wrote:
>>>> On 30.09.2022 14:51, Bertrand Marquis wrote:
>>>>>> On 30 Sep 2022, at 09:50, Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>
>>>>>> efi_init_memory() in both relevant places treats EFI_MEMORY_RUNTIME as
>>>>>> having higher priority than the type of the range. To avoid accessing
>>>>>> memory at runtime which was re-used for other purposes, make
>>>>>> efi_arch_process_memory_map() follow suit. While on x86 in theory the
>>>>>> same would apply to EfiACPIReclaimMemory, we don't actually "reclaim"
>>>>>> E820_ACPI memory there, and hence that type's handling can be left alone.
>>>>>>
>>>>>> Fixes: bf6501a62e80 ("x86-64: EFI boot code")
>>>>>> Fixes: facac0af87ef ("x86-64: EFI runtime code")
>>>>>> Fixes: 6d70ea10d49f ("Add ARM EFI boot support")
>>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>
>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@xxxxxxx> #arm
>>>>
>>>> Thanks. However ...
>>>>
>>>>>> ---
>>>>>> Partly RFC for Arm, for two reasons:
>>>>>>
>>>>>> On Arm I question the conversion of EfiACPIReclaimMemory, in two ways:
>>>>>> For one, as on x86, such ranges would likely better be retained, as Dom0
>>>>>> may (will?) have a need to look at tables placed there. Plus converting
>>>>>> such ranges to RAM even if EFI_MEMORY_WB is not set looks suspicious to
>>>>>> me as well. I'd be inclined to make the latter adjustment right here
>>>>>> (while the other change probably would better be separate, if there
>>>>>> aren't actually reasons for the present behavior).
>>>>
>>>> ... any views on this WB aspect at least (also Stefano or Julien)? Would be
>>>> good to know before I send v2.
>>>
>>> I don't quite understand what you are questioning here. Looking at the
>>> code, EfiACPIReclaimMemory will not be converted to RAM but added to a
>>> separate array.
>>>
>>> Furthermore, all the EfiACPIReclaimMemory regions will be passed to dom0
>>> (see acpi_create_efi_mmap_table()).
>>>
>>> So to me the code looks correct.
>>
>> Oh, I've indeed not paid enough attention to the first argument passed
>> to meminfo_add_bank(). I'm sorry for the extra noise. However, the
>> question I wanted to have addressed before sending out v2 was the one
>> regarding the present use of memory when EFI_MEMORY_WB is not set.
>> Is that correct for the EfiACPIReclaimMemory case, i.e. is the
>> consumer (Dom0) aware that there might be a restriction?
> 
> Looking at the code, we always set EFI_MEMORY_WB for the reclaimable 
> region, and the stage-2 mapping will be cacheable.
> 
> So it looks like there would be a mismatch if EFI_MEMORY_WB is not set. 
> However, given the region is reclaimable, shouldn't this imply that the 
> flag is always set?

Possibly (but then again consider [perhaps hypothetical] systems with e.g.
just WT caches, where specifying WB simply wouldn't make sense). In any
event, even if that's the case, being on the safe side and doing

        if ( (desc_ptr->Attribute & EFI_MEMORY_RUNTIME) ||
             !(desc_ptr->Attribute & EFI_MEMORY_WB) )
            /* leave runtime-marked / non-WB ranges alone */;
        else if ( ...

would seem better to me. However, if the mapping you mention above
were adjusted and ...

>> And would
>> this memory then be guaranteed to never be freed into the general pool
>> of RAM pages?
> 
> The region is not treated as RAM by Xen and not owned by the dom0. 
> Therefore, it should not be possible to free the page because 
> get_page_from_gfn() would not be able to get a reference.

... the space cannot become ordinary RAM, then such a precaution
wouldn't be necessary. After all, hiding EfiACPIReclaimMemory from
Dom0 just because it can't be mapped WB wouldn't be very nice
either. I guess I'll submit v2 with this part of the change left
as it was.
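
To spell out more fully what I have in mind (a sketch only; the loop
context is abbreviated, and the variable names as well as the exact set
of types converted may differ from what efi_arch_process_memory_map()
actually uses):

    for ( i = 0; i < map_size; i += desc_size )
    {
        EFI_MEMORY_DESCRIPTOR *desc_ptr = (void *)map + i;

        if ( (desc_ptr->Attribute & EFI_MEMORY_RUNTIME) ||
             !(desc_ptr->Attribute & EFI_MEMORY_WB) )
            /* leave runtime-marked / non-WB ranges alone */;
        else if ( desc_ptr->Type == EfiConventionalMemory ||
                  desc_ptr->Type == EfiBootServicesCode ||
                  desc_ptr->Type == EfiBootServicesData )
            /* ... record the range as ordinary RAM, e.g. via
               meminfo_add_bank() on Arm ... */;
    }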

Jan
