
Re: [Xen-devel] [PATCH for-4.11 1/2] libxc/x86: fix mapping of the start_info area



On Wed, Mar 28, 2018 at 12:08:07PM +0100, Wei Liu wrote:
> On Thu, Mar 22, 2018 at 09:10:03AM +0000, Roger Pau Monné wrote:
> > On Wed, Mar 21, 2018 at 06:09:57PM +0000, Wei Liu wrote:
> > > On Wed, Mar 21, 2018 at 02:42:10PM +0000, Roger Pau Monne wrote:
> > > > The start_info size calculated in bootlate_hvm is wrong: it uses
> > > > dom->num_modules where it should use HVMLOADER_MODULE_MAX_COUNT, and
> > > > it doesn't take the size of the modules' command lines into account.
> > > > 
> > > > This has not been a problem so far because the amount of memory
> > > > actually used doesn't cross a page boundary, so no page fault is
> > > > triggered.
> > > 
> > > I get the cmdline bit.
> > > 
> > > Why does it need to be HVMLOADER_MODULE_MAX_COUNT? Isn't it better to
> > > just map what we need here?
> > 
> > Because the position of the modules command line is:
> > 
> > modlist_paddr + sizeof(struct hvm_modlist_entry) * 
> > HVMLOADER_MODULE_MAX_COUNT;
> > 
> > (This is from add_module_to_list).
> > 
> > So if dom->num_modules < HVMLOADER_MODULE_MAX_COUNT, the mapped region
> > is smaller than what we might end up using.
> > 
> > I'm not sure why HVMLOADER_MODULE_MAX_COUNT is used when allocating
> > memory (in alloc_magic_pages_hvm) instead of the actual number of
> > modules (dom->num_modules), but the proposed change seems to be the
> > easier way to fix the mapping issue.
> > 
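To make that layout explicit, here is a small self-contained sketch: the
struct shape below only loosely mirrors public/arch-x86/hvm/start_info.h
and the MAX_COUNT value is just a plausible small number, neither is
copied from the tree.

/*
 * Sketch only: illustrates where add_module_to_list places the module
 * command lines relative to the start of the module list.
 */
#include <stdint.h>
#include <stdio.h>

#define HVMLOADER_MODULE_MAX_COUNT 2        /* illustrative value */

struct hvm_modlist_entry {                  /* loosely mirrors the public header */
    uint64_t paddr;
    uint64_t size;
    uint64_t cmdline_paddr;
    uint64_t reserved;
};

int main(void)
{
    unsigned int num_modules = 1;           /* e.g. just a ramdisk */

    /* The command lines always start past a full MAX_COUNT-sized array. */
    size_t cmdline_off =
        sizeof(struct hvm_modlist_entry) * HVMLOADER_MODULE_MAX_COUNT;

    /* End of a region sized only for the modules actually in use. */
    size_t short_map_end =
        sizeof(struct hvm_modlist_entry) * num_modules;

    printf("cmdlines start at +%zu, short region ends at +%zu\n",
           cmdline_off, short_map_end);
    return 0;
}

With the values above the command lines start at offset 64 while a
region sized for a single entry ends at offset 32, i.e. the strings
fall outside it.
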
> 
> This patch is correct, in the sense that it replicates the logic from
> alloc_magic_pages_hvm to bootlate_hvm. However, I don't think
> bootlate_hvm is in the business of calculating the size once more. This
> is bound to fail in the future.

Agreed, the calculation is fairly simple right now, yet we have already
failed to replicate it properly.
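
Concretely, the size both alloc_magic_pages_hvm and bootlate_hvm have to
agree on is roughly the sketch below. This is not the libxc code: the
struct sizes and the per-module command-line allowance are plain
parameters here, and the 8-byte rounding of the kernel command line is
an assumption on my side.

/*
 * Sketch of the total start_info area size; parameter names are made
 * up for illustration.
 */
#include <stddef.h>
#include <string.h>

#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

size_t start_info_area_size(const char *kernel_cmdline,
                            size_t sizeof_start_info,    /* sizeof(struct hvm_start_info) */
                            size_t sizeof_modlist_entry, /* sizeof(struct hvm_modlist_entry) */
                            size_t module_cmdline_room,  /* per-module command line allowance */
                            unsigned int max_modules)    /* HVMLOADER_MODULE_MAX_COUNT */
{
    size_t size = sizeof_start_info;

    if ( kernel_cmdline )
        size += ROUNDUP(strlen(kernel_cmdline) + 1, 8);

    /* Reserve the full module array plus command-line room per slot,
     * matching the layout add_module_to_list assumes. */
    size += max_modules * (sizeof_modlist_entry + module_cmdline_room);

    return size;
}

Whatever the exact terms, spelling that expression out in two places is
what keeps going wrong.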

> Instead, you can stash the size to dom once the calculation in
> alloc_magic_pages_hvm is done, and then use it in bootlate_hvm. This is
> the least fragile way I can think of.

Ack, I think this is correct, and more robust going forward.
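
The shape I have in mind is roughly the following; everything here,
including the start_info_size field name, is made up to illustrate the
pattern rather than lifted from the series:

/*
 * Generic model of "compute once, stash, reuse"; names are
 * illustrative, not libxc's.
 */
#include <stddef.h>

struct dom_builder_state {
    size_t start_info_size;    /* filled in by the allocation stage */
};

/* Allocation stage: do the size calculation exactly once. */
void allocate_start_info(struct dom_builder_state *dom, size_t computed_size)
{
    dom->start_info_size = computed_size;
    /* ... reserve computed_size bytes of guest memory here ... */
}

/* Late boot stage: map exactly what was allocated, no re-derivation. */
size_t start_info_bytes_to_map(const struct dom_builder_state *dom)
{
    return dom->start_info_size;
}

That way bootlate_hvm no longer needs to know how the size was derived.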

Thanks, Roger.


 

