Re: [Xen-devel] [PATCH v3 1/7] xen: vNUMA support for PV guests
>>> On 26.11.13 at 22:59, Elena Ufimtseva <ufimtseva@xxxxxxxxx> wrote:
> Jan is right: if the guest is running Linux configured with maxcpus
> less than the vcpus in the VM config, there is a problem.
>
> At the boot stage where xen_numa_init is called, xen_smp_prepare_cpus
> still sees a cpu count equal to the vcpus in the config; it is only
> reduced to maxcpus (from the kernel boot args) after xen_numa_init,
> during xen_smp_prepare.
>
> In xen_numa_init I have all the values I need to decide whether to
> initialize vnuma, or to modify it:
>
> num_possible_cpus() = the guest vcpus provided by the hypervisor;
> setup_max_cpus = the maxcpus kernel boot parameter;
>
> When setup_max_cpus > num_possible_cpus, only num_possible_cpus cpus
> will be brought up.
>
> I can detect that setup_max_cpus < num_possible_cpus, not initialize
> vnuma at all, and just set up a fake node. I can also make sure the
> hypervisor is aware of it (say, by calling the same subop with NULL).
>
> The hypervisor then has to make some decision about the vnuma topology
> for this domain. This will be as before, when the guest is not aware
> of the underlying NUMA: it will have to fix the vcpu_to_vnode mask and
> possibly adjust pinned vcpus to cpus. The memory, if allocated on
> different nodes, will remain as it is.
>
> Does this sound like a sensible solution? Or do you have other ideas?

IMO there's nothing the hypervisor should do in reaction to the guest
not utilizing all of its assigned vCPU-s - it can't really know whether
they're just not being brought up at boot, but may get started later.
All we need here is a way for the guest to learn its number of virtual
nodes in order to sensibly use the hypercall to obtain its virtual
topology information.

Jan
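
For illustration only, a minimal sketch of the guest-side decision being
discussed, as it might look in xen_numa_init(). The XENMEM_get_vnuma_info
subop name, the vnuma_topology_info layout, and the fallback behaviour are
assumptions made for the sketch, not the code of this patch series:

	#include <linux/smp.h>
	#include <linux/cpumask.h>
	#include <xen/interface/memory.h>
	#include <asm/xen/hypercall.h>

	/*
	 * Sketch only: subop name and struct layout are hypothetical.
	 */
	static int __init xen_numa_init(void)
	{
		struct vnuma_topology_info topo = {
			.domid = DOMID_SELF,	/* hypothetical interface struct */
		};
		int rc;

		/*
		 * First query with NULL buffers: the hypervisor fills in only
		 * nr_vnodes, so the guest learns its number of virtual nodes
		 * before sizing the distance/vcpu/memrange arrays.
		 */
		rc = HYPERVISOR_memory_op(XENMEM_get_vnuma_info, &topo);
		if (rc < 0 || topo.nr_vnodes <= 1)
			return rc ? rc : -EINVAL;

		/*
		 * If the kernel will bring up fewer cpus than the hypervisor
		 * assigned (maxcpus= on the command line), skip vnuma rather
		 * than use a topology referencing cpus that never come up;
		 * returning an error lets the generic code fall back to a
		 * single fake node.
		 */
		if (setup_max_cpus < num_possible_cpus())
			return -EINVAL;

		/*
		 * ... allocate vdistance/vcpu_to_vnode/vmemrange arrays sized
		 * by topo.nr_vnodes, repeat the subop with the real buffers,
		 * and register the nodes via numa_add_memblk()/node_set() ...
		 */
		return 0;
	}

The point of the initial NULL query is exactly what Jan asks for: the guest
learns nr_vnodes up front and can size its buffers, without the hypervisor
having to second-guess how many vCPU-s the guest will eventually bring up.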