Re: [PATCH v3 05/22] x86/boot/slaunch-early: early TXT checks and boot data retrieval
On 30.05.2025 15:17, Sergii Dmytruk wrote:
> --- a/xen/arch/x86/include/asm/intel-txt.h
> +++ b/xen/arch/x86/include/asm/intel-txt.h
> @@ -93,6 +93,8 @@
>  
>  #ifndef __ASSEMBLY__
>  
> +#include <xen/slr-table.h>
> +
>  /* Need to differentiate between pre- and post paging enabled. */
>  #ifdef __EARLY_SLAUNCH__
>  #include <xen/macros.h>
> @@ -308,6 +310,116 @@ static inline void *txt_init(void)
>      return txt_heap;
>  }
>  
> +static inline int is_in_pmr(const struct txt_os_sinit_data *os_sinit,
> +                            uint64_t base, uint32_t size, int check_high)
> +{
> +    /* Check for size overflow. */
> +    if ( base + size < base )
> +        txt_reset(SLAUNCH_ERROR_INTEGER_OVERFLOW);
> +
> +    /* Low range always starts at 0, so its size is also end address. */
> +    if ( base >= os_sinit->vtd_pmr_lo_base &&
> +         base + size <= os_sinit->vtd_pmr_lo_size )

If you leverage what the comment says in the 2nd comparison, why not also
in the first (which means that could be dropped altogether)? If the start
is always zero, why does the field exist in the first place?

> +        return 1;
> +
> +    if ( check_high && os_sinit->vtd_pmr_hi_size != 0 )
> +    {
> +        if ( os_sinit->vtd_pmr_hi_base + os_sinit->vtd_pmr_hi_size <
> +             os_sinit->vtd_pmr_hi_size )
> +            txt_reset(SLAUNCH_ERROR_INTEGER_OVERFLOW);
> +        if ( base >= os_sinit->vtd_pmr_hi_base &&
> +             base + size <= os_sinit->vtd_pmr_hi_base +
> +                            os_sinit->vtd_pmr_hi_size )
> +            return 1;
> +    }
> +
> +    return 0;
> +}
> +
> +static inline void txt_verify_pmr_ranges(
> +    const struct txt_os_mle_data *os_mle,
> +    const struct txt_os_sinit_data *os_sinit,
> +    const struct slr_entry_intel_info *info,
> +    uint32_t load_base_addr,
> +    uint32_t tgt_base_addr,
> +    uint32_t xen_size)
> +{
> +    int check_high_pmr = 0;

Just like Ross mentioned for the return value of is_in_pmr(), this one
also looks as if it wanted to be bool.

> +    /* Verify the value of the low PMR base. It should always be 0. */
> +    if ( os_sinit->vtd_pmr_lo_base != 0 )
> +        txt_reset(SLAUNCH_ERROR_LO_PMR_BASE);
> +
> +    /*
> +     * Low PMR size should not be 0 on current platforms. There is an ongoing
> +     * transition to TPR-based DMA protection instead of PMR-based; this is not
> +     * yet supported by the code.
> +     */
> +    if ( os_sinit->vtd_pmr_lo_size == 0 )
> +        txt_reset(SLAUNCH_ERROR_LO_PMR_SIZE);
> +
> +    /* Check if regions overlap. Treat regions with no hole between as error. */
> +    if ( os_sinit->vtd_pmr_hi_size != 0 &&
> +         os_sinit->vtd_pmr_hi_base <= os_sinit->vtd_pmr_lo_size )
> +        txt_reset(SLAUNCH_ERROR_HI_PMR_BASE);
> +
> +    /* All regions accessed by 32b code must be below 4G. */
> +    if ( os_sinit->vtd_pmr_hi_base + os_sinit->vtd_pmr_hi_size <=
> +         0x100000000ULL )
> +        check_high_pmr = 1;

The addition overflowing is only checked later, and that check may be
bypassed based on the result here. Is that not a possible problem?

> +    /*
> +     * ACM checks that TXT heap and MLE memory is protected against DMA. We have
> +     * to check if MBI and whole Xen memory is protected. The latter is done in
> +     * case bootloader failed to set whole image as MLE and to make sure that
> +     * both pre- and post-relocation code is protected.
> +     */
> +
> +    /* Check if all of Xen before relocation is covered by PMR. */
> +    if ( !is_in_pmr(os_sinit, load_base_addr, xen_size, check_high_pmr) )
> +        txt_reset(SLAUNCH_ERROR_LO_PMR_MLE);
> +
> +    /* Check if all of Xen after relocation is covered by PMR. */
> +    if ( load_base_addr != tgt_base_addr &&
> +         !is_in_pmr(os_sinit, tgt_base_addr, xen_size, check_high_pmr) )
> +        txt_reset(SLAUNCH_ERROR_LO_PMR_MLE);
> +
> +    /*
> +     * If present, check that MBI is covered by PMR. MBI starts with 'uint32_t
> +     * total_size'.
> +     */
> +    if ( info->boot_params_base != 0 &&
> +         !is_in_pmr(os_sinit, info->boot_params_base,
> +                    *(uint32_t *)(uintptr_t)info->boot_params_base,

What is this "MBI" which "starts with 'uint32_t total_size'"? Do you
perhaps mean multiboot2_fixed_t? If you really can't use a proper
structure ref here, please at least mention whatever type that is in our
code base, so the use can be found by e.g. grep.

> +                    check_high_pmr) )
> +        txt_reset(SLAUNCH_ERROR_BUFFER_BEYOND_PMR);
> +
> +    /* Check if TPM event log (if present) is covered by PMR. */
> +    /*
> +     * FIXME: currently commented out as GRUB allocates it in a hole between
> +     * PMR and reserved RAM, due to 2MB resolution of PMR. There are no other
> +     * easy-to-use DMA protection mechanisms that would allow to protect that
> +     * part of memory. TPR (TXT DMA Protection Range) gives 1MB resolution, but
> +     * it still wouldn't be enough.
> +     *
> +     * One possible solution would be for GRUB to allocate log at lower address,
> +     * but this would further increase memory space fragmentation. Another
> +     * option is to align PMR up instead of down, making PMR cover part of
> +     * reserved region, but it is unclear what the consequences may be.
> +     *
> +     * In tboot this issue was resolved by reserving leftover chunks of memory
> +     * in e820 and/or UEFI memory map. This is also a valid solution, but would
> +     * require more changes to GRUB than the ones listed above, as event log is
> +     * allocated much earlier than PMRs.
> +     */
> +    /*
> +    if ( os_mle->evtlog_addr != 0 && os_mle->evtlog_size != 0 &&
> +         !is_in_pmr(os_sinit, os_mle->evtlog_addr, os_mle->evtlog_size,
> +                    check_high_pmr) )
> +        txt_reset(SLAUNCH_ERROR_BUFFER_BEYOND_PMR);
> +    */
> +}
> +
>  #endif /* __ASSEMBLY__ */

These inline functions are pretty large. Why do they need to be inline
ones?

Jan
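[Editorial note for readers following the thread: the two review points about the int-vs-bool return and the bypassable overflow check can be illustrated with a standalone sketch. This is not the actual Xen patch; the struct below is a simplified stand-in for the vtd_pmr_* fields of struct txt_os_sinit_data, and on overflow it returns false rather than calling txt_reset().]

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the vtd_pmr_* fields quoted above. */
struct pmr_layout {
    uint64_t lo_size;   /* low PMR always starts at 0, so size == end */
    uint64_t hi_base;
    uint64_t hi_size;
};

/*
 * Returns bool, as suggested in the review, and checks the high-PMR end
 * address for overflow before comparing against it, so the overflow check
 * cannot be bypassed by the value of check_high.
 */
static bool in_pmr(const struct pmr_layout *p, uint64_t base, uint32_t size,
                   bool check_high)
{
    if ( base + size < base )       /* range end wraps around: reject */
        return false;

    /* Low range starts at 0: only the end address needs comparing. */
    if ( base + size <= p->lo_size )
        return true;

    if ( check_high && p->hi_size != 0 )
    {
        if ( p->hi_base + p->hi_size < p->hi_base )  /* high end wraps */
            return false;
        if ( base >= p->hi_base &&
             base + size <= p->hi_base + p->hi_size )
            return true;
    }

    return false;
}
```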
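[Editorial note: the "MBI" being dereferenced is a Multiboot2 information structure; per the Multiboot2 specification it begins with a fixed header holding its own total size, which is what Jan's multiboot2_fixed_t suggestion refers to. A hedged sketch, with a hypothetical stand-in type name and memcpy() used instead of the patch's raw cast to sidestep alignment questions:]

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch only: stand-in for the fixed header at the start of a Multiboot2
 * information structure (MBI).  Per the Multiboot2 spec the MBI begins
 * with its total size in bytes, followed by a reserved word.
 */
typedef struct {
    uint32_t total_size; /* size of the whole MBI, including this header */
    uint32_t reserved;
} mb2_fixed_t;

/*
 * Read the size the way the quoted patch does via
 * *(uint32_t *)info->boot_params_base, but without assuming alignment.
 */
static uint32_t mbi_total_size(const void *mbi)
{
    mb2_fixed_t hdr;

    memcpy(&hdr, mbi, sizeof(hdr));
    return hdr.total_size;
}
```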