
Re: [PATCH] tools/init-xenstore-domain: fix memory map for PVH stubdom


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Anthony PERARD <anthony.perard@xxxxxxxxxx>
  • Date: Thu, 7 Jul 2022 15:45:36 +0100
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 07 Jul 2022 14:46:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Jun 24, 2022 at 11:28:06AM +0200, Juergen Gross wrote:
> In case of maxmem != memsize the E820 map of the PVH stubdom is wrong,
> as it is missing the RAM above memsize.
> 
> Additionally the MMIO area should only cover the HVM special pages.
> 
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
>  tools/helpers/init-xenstore-domain.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
> index b4f3c65a8a..dad8e43c42 100644
> --- a/tools/helpers/init-xenstore-domain.c
> +++ b/tools/helpers/init-xenstore-domain.c
> @@ -157,21 +158,24 @@ static int build(xc_interface *xch)
>          config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
>          config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
>          dom->target_pages = mem_size >> XC_PAGE_SHIFT;
> -        dom->mmio_size = GB(4) - LAPIC_BASE_ADDRESS;
> +        dom->mmio_size = X86_HVM_NR_SPECIAL_PAGES << XC_PAGE_SHIFT;
>          dom->lowmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
>                            LAPIC_BASE_ADDRESS : mem_size;
>          dom->highmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
>                             GB(4) + mem_size - LAPIC_BASE_ADDRESS : 0;
> -        dom->mmio_start = LAPIC_BASE_ADDRESS;
> +        dom->mmio_start = (X86_HVM_END_SPECIAL_REGION -
> +                           X86_HVM_NR_SPECIAL_PAGES) << XC_PAGE_SHIFT;
>          dom->max_vcpus = 1;
>          e820[0].addr = 0;
> -        e820[0].size = dom->lowmem_end;
> +        e820[0].size = (max_size > LAPIC_BASE_ADDRESS) ?
> +                       LAPIC_BASE_ADDRESS : max_size;
>          e820[0].type = E820_RAM;
> -        e820[1].addr = LAPIC_BASE_ADDRESS;
> +        e820[1].addr = dom->mmio_start;


So, it isn't expected to have an entry covering the LAPIC addresses,
right? I guess not, as seen in df1ca1dfe20.
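
(To check my understanding, here is a quick standalone sketch of how I
read the resulting layout for maxmem > memsize. The constant values are
assumptions written from memory, and the RAM entry above 4G is only my
guess from the commit description, since the hunk is cut short above;
please correct me where I'm wrong.)

    /* Hypothetical sketch, not the patch itself: assumed constants plus an
     * example maxmem, just to visualise which ranges end up in the map. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GB(x)                      ((uint64_t)(x) << 30)
    #define XC_PAGE_SHIFT              12
    #define LAPIC_BASE_ADDRESS         ((uint64_t)0xfee00000) /* assumption */
    #define X86_HVM_NR_SPECIAL_PAGES   ((uint64_t)8)          /* assumption */
    #define X86_HVM_END_SPECIAL_REGION ((uint64_t)0xff000)    /* end PFN, assumption */

    int main(void)
    {
        uint64_t max_size   = GB(8);   /* example maxmem, > LAPIC_BASE_ADDRESS */
        uint64_t mmio_start = (X86_HVM_END_SPECIAL_REGION -
                               X86_HVM_NR_SPECIAL_PAGES) << XC_PAGE_SHIFT;
        uint64_t mmio_size  = X86_HVM_NR_SPECIAL_PAGES << XC_PAGE_SHIFT;

        /* e820[0]: RAM below the LAPIC address */
        printf("RAM      0x%012" PRIx64 " - 0x%012" PRIx64 "\n",
               (uint64_t)0, LAPIC_BASE_ADDRESS);
        /* e820[1]: reserved range covering only the HVM special pages */
        printf("RESERVED 0x%012" PRIx64 " - 0x%012" PRIx64 "\n",
               mmio_start, mmio_start + mmio_size);
        /* e820[2]: the rest of maxmem relocated above 4G (my guess for the
         * part of the hunk that is cut above) */
        printf("RAM      0x%012" PRIx64 " - 0x%012" PRIx64 "\n",
               GB(4), GB(4) + max_size - LAPIC_BASE_ADDRESS);
        /* Note: no entry covers the LAPIC page at LAPIC_BASE_ADDRESS. */
        return 0;
    }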

But based on that same commit message, shouldn't the LAPIC address be
part of the `dom->mmio_start, dom->mmio_size` range? (I don't know yet
how dom->mmio_start is used, but maybe Xen or the Xen libraries use it
to avoid allocations in the wrong places.)
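
Something along these lines is what I was imagining (purely a sketch,
untested, reusing the constant names from the snippet above, which are
assumptions on my side):

    /* Hypothetical alternative, not part of the patch: start the MMIO
     * window at the LAPIC and run it up to the end of the special region,
     * so the LAPIC page falls inside [mmio_start, mmio_start + mmio_size). */
    dom->mmio_start = LAPIC_BASE_ADDRESS;
    dom->mmio_size  = (X86_HVM_END_SPECIAL_REGION << XC_PAGE_SHIFT) -
                      LAPIC_BASE_ADDRESS;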

Thanks,

-- 
Anthony PERARD



 

