
Re: [Xen-devel] [PATCH v4.2] libxc: Defer initialization of start_page for HVM guests



On 08/01/16 16:11, Ian Campbell wrote:
> On Fri, 2016-01-08 at 09:53 -0500, Boris Ostrovsky wrote:
>> On 01/08/2016 09:30 AM, Juergen Gross wrote:
>>> On 08/01/16 15:19, Boris Ostrovsky wrote:
>>>> On 01/07/2016 11:57 PM, Juergen Gross wrote:
>>>>> On 07/01/16 23:19, Boris Ostrovsky wrote:
>>>>>> With commit 8c45adec18e0 ("libxc: create unmapped initrd in domain
>>>>>> builder if supported"), the location of the ramdisk may not be
>>>>>> available to HVMlite guests by the time alloc_magic_pages_hvm() is
>>>>>> invoked, if the guest supports an unmapped initrd.
>>>>>>
>>>>>> So let's move ramdisk info initialization (along with a few other
>>>>>> operations that are not directly related to allocating magic/special
>>>>>> pages) from alloc_magic_pages_hvm() to bootlate_hvm().
>>>>>>
>>>>>> Since we now split allocation and mapping of the start_info segment,
>>>>>> let's stash it, along with the cmdline length, in xc_dom_image so
>>>>>> that we can check whether we are mapping a correctly-sized range.
>>>>>>
>>>>>> We can also stop using xc_dom_image.start_info_pfn and leave it for
>>>>>> PV(H) guests only.
>>>>>>
>>>>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>>>>>> ---
>>>>>> v4:
>>>>>>    * See the last two paragraphs from commit message above
>>>>>>
>>>>>> v4.1:
>>>>>>    * Inverted testing of start_info_size in bootlate_hvm().
>>>>>>
>>>>>> v4.2:
>>>>>>    * <facepalm> Actually do what I said I'd do in 4.1
>>>>>>
>>>>>>    tools/libxc/include/xc_dom.h |    2 +
>>>>>>    tools/libxc/xc_dom_x86.c     |  140 +++++++++++++++++++++++------------------
>>>>>>    2 files changed, 80 insertions(+), 62 deletions(-)
>>>>>>
>>>>>> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
>>>>>> index 2460818..cac4698 100644
>>>>>> --- a/tools/libxc/include/xc_dom.h
>>>>>> +++ b/tools/libxc/include/xc_dom.h
>>>>>> @@ -71,6 +71,7 @@ struct xc_dom_image {
>>>>>>          /* arguments and parameters */
>>>>>>        char *cmdline;
>>>>>> +    size_t cmdline_size;
>>>>>>        uint32_t f_requested[XENFEAT_NR_SUBMAPS];
>>>>>>          /* info from (elf) kernel image */
>>>>>> @@ -91,6 +92,7 @@ struct xc_dom_image {
>>>>>>        struct xc_dom_seg p2m_seg;
>>>>>>        struct xc_dom_seg pgtables_seg;
>>>>>>        struct xc_dom_seg devicetree_seg;
>>>>>> +    struct xc_dom_seg start_info_seg; /* HVMlite only */
>>>>> Instead of adding HVM-specific members here, you could make use of
>>>>> dom.arch_private and use just a local structure defined in
>>>>> xc_dom_x86.c.
>>>> I did consider this, but since we already keep type-specific segments
>>>> in this structure (e.g. p2m_seg) I decided to add an explicit segment
>>>> for HVMlite.
>>> But p2m_seg is accessed from multiple sources, while cmdline_size and
>>> start_info_seg would be local to xc_dom_x86.c.
>>>
>>> BTW: thanks for the hint - I'll have a look at whether p2m_seg can be
>>> moved to arch_private...
>>>
>>>> Besides, I think that to use it properly we'd need to add an arch
>>>> hook, and IMHO it's not worth the trouble in this case.
>>> Why would you need another arch hook? Just add the arch_private_size
>>> member to struct xc_dom_arch and everything is set up for you. Look at
>>> how it is handled for the PV case in xc_dom_x86.c.
>>
>> So it is already hooked up; I didn't notice that we do register
>> xc_hvm_32, even though arch_private_size is 0.
>>
>> This would be a type-specific area though, not arch-specific as the name 
>> implies. So perhaps xc_dom_image_x86 should be modified to include 
>> type-specific structures (via a union).
> 
> You are talking about future work here, right? There's no reason not to
> proceed with the current patch AFAICT; I'm really just giving Roger a
> chance to have a look at this point.
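
For reference, the future work being discussed here (keeping these fields
private to xc_dom_x86.c via dom->arch_private, as Juergen suggests) might
look roughly like the sketch below. arch_private_size and dom->arch_private
are the existing libxc mechanism mentioned above; the struct name
xc_dom_image_hvm and the function bodies are illustrative assumptions, not
part of this patch.

    #include <string.h>
    #include "xc_dom.h"

    /*
     * Rough sketch, local to tools/libxc/xc_dom_x86.c.  (Boris's idea of
     * a union of type-specific structures inside xc_dom_image_x86 would
     * be a variation on the same theme.)
     */
    struct xc_dom_image_hvm {
        struct xc_dom_seg start_info_seg;
        size_t cmdline_size;
    };

    static struct xc_dom_arch xc_hvm_32 = {
        /* ... existing hooks and guest_type elided ... */
        .arch_private_size = sizeof(struct xc_dom_image_hvm),
    };

    static int bootlate_hvm(struct xc_dom_image *dom)
    {
        /* dom->arch_private is allocated by the generic domain builder
         * based on arch_private_size, so no extra arch hook is needed. */
        struct xc_dom_image_hvm *hvm = dom->arch_private;

        /* e.g. record the cmdline length here instead of adding a new
         * xc_dom_image member. */
        hvm->cmdline_size = dom->cmdline ? strlen(dom->cmdline) + 1 : 0;

        return 0;
    }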

LGTM:

Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>


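To illustrate the "correctly-sized range" check mentioned in the commit
message, here is a rough sketch of the idea. The vstart/vend fields come
from the existing struct xc_dom_seg, and start_info_seg is the member added
by this patch; the helper name, the size computation and the error handling
are simplified assumptions, not the actual bootlate_hvm() code.

    /* Sketch only: compare the size we are about to map against the
     * start_info_seg that was stashed when the segment was allocated. */
    static int check_start_info_size(struct xc_dom_image *dom,
                                     size_t start_info_size)
    {
        size_t allocated =
            dom->start_info_seg.vend - dom->start_info_seg.vstart;

        if ( start_info_size > allocated )
        {
            xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
                         "%s: start_info (%zu bytes) does not fit the "
                         "allocated segment (%zu bytes)",
                         __func__, start_info_size, allocated);
            return -1;
        }

        return 0;
    }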