Re: [Xen-devel] [PATCH for-4.12 4/8] x86/shadow: alloc enough pages so initialization doesn't fail
>>> On 05.02.19 at 14:52, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, Feb 05, 2019 at 05:55:50AM -0700, Jan Beulich wrote:
>> >>> On 05.02.19 at 12:47, <roger.pau@xxxxxxxxxx> wrote:
>> > On Tue, Feb 05, 2019 at 04:21:52AM -0700, Jan Beulich wrote:
>> >> >>> On 30.01.19 at 11:36, <roger.pau@xxxxxxxxxx> wrote:
>> >> > Current code in shadow_enable will allocate a shadow pool of 4MB
>> >> > regardless of the values of sh_min_allocation or
>> >> > shadow_min_acceptable_pages, which means that calls to
>> >> > shadow_alloc_p2m_page can fail even after the check and allocation
>> >> > done just above.
>> >> >
>> >> > Fix this by always checking that the pool is big enough, so the rest
>> >> > of the shadow_enable function cannot fail due to lack of pages in the
>> >> > shadow pool. This is relevant to shadow_alloc_p2m_page, which requires
>> >> > a minimum of shadow_min_acceptable_pages(d) + 1 pages in the pool.
>> >> >
>> >> > This allows booting a guest using shadow and more than 6 vCPUs.
>> >>
>> >> I'm routinely booting 8-vCPU guests without issues.
>> >
>> > For me the following simple example with 8 vcpus doesn't work:
>> >
>> > # cat test.cfg
>> > name = "test"
>> > type = "hvm"
>> >
>> > memory = 256
>>
>> I admit I've never tried this small a guest with ...
>>
>> > vcpus = 8
>>
>> ... this many vCPU-s.
>
> I don't think the amount of guest memory matters here; the following
> example with 8G of RAM and 8 vCPUs fails in the same way:
>
> # cat test.cfg
> name = "test"
> type = "hvm"
>
> memory = 8192
> vcpus = 8
> hap = 0
> # xl create test.cfg
> Parsing config from test.cfg
> libxl: error: libxl_create.c:578:libxl__domain_make: domain creation fail: Cannot allocate memory
> libxl: error: libxl_create.c:975:initiate_domain_create: cannot make domain: -3
>
> And I think that's a perfectly suitable guest config.

Indeed. And it doesn't seem to work for me anymore either.
Must be a regression, as I'm pretty sure it did still work not all that
long ago. Not even "shadow_memory=256" helps.

>> >> > --- a/xen/arch/x86/mm/shadow/common.c
>> >> > +++ b/xen/arch/x86/mm/shadow/common.c
>> >> > @@ -2705,6 +2705,11 @@ int shadow_enable(struct domain *d, u32 mode)
>> >> >      uint32_t *e;
>> >> >      int rv = 0;
>> >> >      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> >> > +    /*
>> >> > +     * Required minimum amount of pool pages plus 4MB. This is
>> >> > +     * required so the calls to p2m_alloc_table and
>> >> > +     * shadow_alloc_p2m_page below don't fail.
>> >> > +     */
>> >> > +    unsigned int min_pages = shadow_min_acceptable_pages(d) + 1024;
>> >>
>> >> sh_min_allocation() also takes the memory size of the domain into
>> >> account. Aren't you therefore risking regressing larger guests by
>> >> instead using a fixed amount here? The more that ...
>> >
>> > shadow_enable is called by domain_create, and at this point the
>> > memory size of the guest is not yet known AFAICT. I assume the
>> > toolstack will make further hypercalls to set a suitably sized shadow
>> > memory pool after the domain has been created and before populating
>> > the physmap.
>>
>> Hmm, good point, and no, I don't think there are subsequent calls
>> to shadow_enable(); at least I can't find an invocation of
>> XEN_DOMCTL_SHADOW_OP_ENABLE.
>
> You can expand the shadow pool using
> XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION; there's no need to call
> XEN_DOMCTL_SHADOW_OP_ENABLE for that.

Ah, right. And libxl has a SET_ALLOCATION invocation.

> In fact I'm not sure what the point of XEN_DOMCTL_SHADOW_OP_ENABLE is,
> since shadow is enabled when the domain is created (domain_create)
> with the XEN_DOMCTL_createdomain domctl, and at this point the memory
> size of the domain is not yet known by the hypervisor.

I guess it served a more significant purpose in the past.

> Maybe you can create a HAP or PV domain and turn shadow on
> afterwards?

HAP - I don't think so.
PV - sure, for log-dirty mode.

>> But then the correct course of action would be to suitably grow the
>> shadow pool as memory gets added to the domain (be it Dom0 or
>> a DomU). Sticking to a fixed value of 1024 can't very well be the
>> best course of action in all possible cases.
>
> Right, but it turns out 1024 (4MB) is not suitable given the example
> above. I'm open to other options, but IMO this needs to be fixed for
> 4.12 in one way or another.

Yes, and not just for 4.12, unless this is an issue there only. Not
sure what other options you're after; I think I've said what I think
would be needed. Or maybe that remark was rather towards others?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel