
Re: [Xen-devel] [PATCH v2] x86/dom0: improve PVH initrd and metadata placement


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 4 Mar 2020 11:09:26 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Wei Liu <wl@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Mar 04, 2020 at 11:00:18AM +0100, Jan Beulich wrote:
> On 04.03.2020 10:53, Roger Pau Monné wrote:
> > On Tue, Mar 03, 2020 at 04:40:36PM +0100, Jan Beulich wrote:
> >> On 03.03.2020 12:52, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/hvm/dom0_build.c
> >>> +++ b/xen/arch/x86/hvm/dom0_build.c
> >>> @@ -490,6 +490,45 @@ static int __init pvh_populate_p2m(struct domain *d)
> >>>  #undef MB1_PAGES
> >>>  }
> >>>  
> >>> +static paddr_t find_memory(const struct domain *d, const struct elf_binary *elf,
> >>> +                           size_t size)
> >>> +{
> >>> +    paddr_t kernel_start = (paddr_t)elf->dest_base;
> >>> +    paddr_t kernel_end = (paddr_t)(elf->dest_base + elf->dest_size);
> >>> +    unsigned int i;
> >>> +
> >>> +    for ( i = 0; i < d->arch.nr_e820; i++ )
> >>> +    {
> >>> +        paddr_t start, end = d->arch.e820[i].addr + d->arch.e820[i].size;
> >>> +
> >>> +        /* Don't use memory below 1MB, as it could overwrite the BDA/EBDA. */
> >>> +        if ( end <= MB(1) || d->arch.e820[i].type != E820_RAM )
> >>> +            continue;
> >>> +
> >>> +        start = MAX(ROUNDUP(d->arch.e820[i].addr, PAGE_SIZE), MB(1));
> >>> +
> >>> +        if ( end <= kernel_start || start >= kernel_end )
> >>> +            ; /* No overlap, nothing to do. */
> >>> +        /* Deal with the kernel already being loaded in the region. */
> >>> +        else if ( kernel_start <= start && kernel_end > start )
> >>
> >> Since, according to your reply on v1, [kernel_start,kernel_end) is
> >> a subset of [start,end), I understand that the <= could equally
> >> well be == - do you agree? From this then ...
> >>
> >>> +            /* Truncate the start of the region. */
> >>> +            start = ROUNDUP(kernel_end, PAGE_SIZE);
> >>> +        else if ( kernel_start <= end && kernel_end > end )
> >>
> >> ... it follows that you now have two off-by-1s here, as you changed
> >> the right side of the && instead of the left one (the right side
> >> could, as per above, use == again). Using == in both places would,
> >> in lieu of a comment, imo make more visible to the reader that
> >> there is this sub-range relationship between both ranges.
> > 
> > Right, I agree to both the above and have adjusted the conditions.
> > 
> >>> +            /* Truncate the end of the region. */
> >>> +            end = kernel_start;
> >>> +        /* Pick the biggest of the split regions. */
> >>
> >> Then again - wouldn't this part suffice? If start == kernel_start
> >> or end == kernel_end, one side of the "split" region would simply
> >> be empty.
> > 
> > That's why it's using an else if construct, so that we only get
> > here if the kernel is loaded in the middle of the region, and there
> > are two regions left as part of the split.
> 
> But my question is - do we really need the earlier parts of
> this if/else-if chain? Won't this latter part deal fine with
> zero-sized ranges at head or tail of the region?

Oh, sorry, I misread your reply. Yes, you are right, I can achieve
the same with just this last part.

Thanks, Roger.
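
[A minimal standalone sketch of the collapsed overlap handling agreed on
above: the single "split" computation covers the head, tail and middle
cases, because a sub-range left empty by the kernel placement simply
ends up zero-sized, which also subsumes the explicit == head/tail
branches discussed earlier in the thread. The find_space() helper, its
names and the test values are illustrative assumptions, not code from
the actual patch.]

#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

#define PAGE_SIZE 4096ULL
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/*
 * Clip a candidate RAM region [start, end) against the already loaded
 * kernel [kernel_start, kernel_end) and report where 'size' bytes fit.
 * Returns 0 if the region has no room (0 is never a valid placement
 * here, as real callers skip memory below 1MB anyway).
 */
static paddr_t find_space(paddr_t start, paddr_t end,
                          paddr_t kernel_start, paddr_t kernel_end,
                          paddr_t size)
{
    if ( end > kernel_start && start < kernel_end )
    {
        /* Sub-range below the kernel (empty if the kernel is at the head). */
        paddr_t head = kernel_start > start ? kernel_start - start : 0;
        /* Sub-range above the kernel, page-aligned (empty if at the tail). */
        paddr_t above = ROUNDUP(kernel_end, PAGE_SIZE);
        paddr_t tail = end > above ? end - above : 0;

        /* Pick the biggest of the split regions. */
        if ( head >= tail )
            end = start + head;
        else
            start = above;
    }

    return end - start >= size ? start : 0;
}

int main(void)
{
    /* Kernel loaded at [2MiB, 3MiB) inside a RAM region [1MiB, 8MiB). */
    paddr_t where = find_space(0x100000, 0x800000,
                               0x200000, 0x300000, 0x100000);

    printf("1MiB fits at %#llx\n", (unsigned long long)where);
    return 0;
}

[Compiled as C99 this prints "1MiB fits at 0x300000": the bigger split
region, above the kernel image, is chosen. Had the kernel sat at the
very start of the region, head would be 0 and the same code would fall
through to the tail, with no dedicated branch needed.]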
