Re: [Xen-devel] [PATCH 03/10] x86/cpuid: Handle leaf 0x1 in guest_cpuid()
>>> On 21.02.17 at 18:29, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 21/02/17 17:20, Jan Beulich wrote:
>>
>>>>> The final 8 bits are the initial legacy APIC ID.  For HVM guests, this was
>>>>> overridden to vcpu_id * 2.  The same logic is now applied to PV guests, so
>>>>> guests don't observe a constant number on all vcpus via their emulated or
>>>>> faulted view.
>>>> They won't be the same everywhere, but every 128th CPU will
>>>> share values.  I'm therefore not sure it wouldn't be better to hand
>>>> out zero or all ones here.
>>> There is no case where 128 cpus work sensibly under Xen ATM.
>> For HVM you mean.  I'm sure I've seen > 128 vCPU PV guests
>> (namely Dom0-s).
>
> You can physically create PV domains with up to 8192 vcpus.  I tried
> this once.
>
> The NMI watchdog (even set to 10s) is unforgiving of some of the
> for_each_vcpu() loops during domain destruction.
>
> I can also still create workloads in a 64vcpu HVM guest which will cause
> a 5 second watchdog timeout, which is why XenServer's upper supported
> vcpu limit is still 32.

Which does not contradict what I've said: I didn't claim 8k-vCPU guests
would work well, but I'm pretty convinced ones in the range 128...512
have reasonable chances of working.  And we both know the situation
sadly is worse for HVM ones.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
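
For context on the wrap-around discussed above: the initial legacy APIC ID
occupies bits 31:24 of CPUID leaf 0x1 EBX, so a value of vcpu_id * 2 is
truncated to 8 bits and repeats every 128 vcpus.  Below is a minimal,
standalone C sketch illustrating that truncation; leaf1_ebx() is a made-up
helper for illustration only, not the actual guest_cpuid() code in Xen.

    /*
     * Illustrative sketch (not Xen code): how a vcpu_id * 2 initial APIC ID
     * lands in CPUID leaf 0x1 EBX[31:24], and why every 128th vcpu ends up
     * sharing a value once the product no longer fits in 8 bits.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t leaf1_ebx(unsigned int vcpu_id, uint32_t ebx)
    {
        /* Clear the existing initial APIC ID byte... */
        ebx &= 0x00ffffffu;
        /* ...and insert vcpu_id * 2, truncated to the 8-bit field width. */
        ebx |= (uint32_t)((vcpu_id * 2) & 0xff) << 24;
        return ebx;
    }

    int main(void)
    {
        /* vcpu 126 -> 0xfc, vcpu 127 -> 0xfe, vcpu 128 wraps back to 0x00. */
        for (unsigned int id = 126; id < 131; ++id)
            printf("vcpu %3u -> APIC ID 0x%02x\n",
                   id, (unsigned int)(leaf1_ebx(id, 0) >> 24));
        return 0;
    }

Running the sketch shows vcpu 128 reporting the same initial APIC ID as
vcpu 0, which is the collision Jan is pointing out for guests with more
than 128 vcpus.
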