
Re: [Xen-devel] [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512



On 01/03/18 06:21, Chao Gao wrote:
> On Mon, Feb 26, 2018 at 09:10:33AM -0700, Jan Beulich wrote:
>>>>> On 26.02.18 at 14:11, <chao.gao@xxxxxxxxx> wrote:
>>> On Mon, Feb 26, 2018 at 01:26:42AM -0700, Jan Beulich wrote:
>>>>>>> On 23.02.18 at 19:11, <roger.pau@xxxxxxxxxx> wrote:
>>>>> On Wed, Dec 06, 2017 at 03:50:14PM +0800, Chao Gao wrote:
>>>>>> Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
>>>>>> ---
>>>>>>  xen/include/public/hvm/hvm_info_table.h | 2 +-
>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/xen/include/public/hvm/hvm_info_table.h b/xen/include/public/hvm/hvm_info_table.h
>>>>>> index 08c252e..6833a4c 100644
>>>>>> --- a/xen/include/public/hvm/hvm_info_table.h
>>>>>> +++ b/xen/include/public/hvm/hvm_info_table.h
>>>>>> @@ -32,7 +32,7 @@
>>>>>>  #define HVM_INFO_PADDR       ((HVM_INFO_PFN << 12) + HVM_INFO_OFFSET)
>>>>>>  
>>>>>>  /* Maximum we can support with current vLAPIC ID mapping. */
>>>>>> -#define HVM_MAX_VCPUS        128
>>>>>> +#define HVM_MAX_VCPUS        512
>>>>>
>>>>> Wow, that looks like a pretty big jump. I certainly don't have access
>>>>> to any box with this number of vCPUs, so that's going to be quite hard
>>>>> to test. What's the reasoning behind this bump? Is hardware with 512
>>>>> ways expected soon-ish?
>>>>>
>>>>> Also osstest is not even able to test the current limit, so I would
>>>>> maybe bump this to 256, but as I've expressed on other occasions I
>>>>> don't feel comfortable with having a number of vCPUs that the current
>>>>> test system doesn't have the hardware to test with.
>>>>
>>>> I think implementation limit and supported limit need to be clearly
>>>> distinguished here. Therefore I'd put the question the other way
>>>> around: What's causing the limit to be 512, rather than 1024,
>>>> 4096, or even 4G-1 (x2APIC IDs are 32 bits wide, after all)?
>>>
>>> TBH, I have no idea. When I chose a value, what first came to mind was
>>> 288, because Intel's Xeon Phi platform has 288 physical threads, and
>>> some customers want to use this new platform for HPC cloud. Furthermore,
>>> they have requested support for a big VM to which almost all computing
>>> and device resources are assigned; they just use virtualization
>>> technology to manage the machines. Given that, I chose 512 because I
>>> feel much better if the limit is a power of 2.
>>>
>>> You are asking, now that these patches remove the limitations imposed
>>> by some components, which component is the next bottleneck and how many
>>> vcpus it allows. Maybe the bottleneck is the use case: no one is asking
>>> to support more than 288 at this moment. So which value do you prefer,
>>> 288 or 512? Or do you think I should find the next bottleneck in Xen's
>>> implementation?
>>
>> Again - here we're talking about implementation limits, not
>> bottlenecks. So in this context all I'm interested in is whether
>> (and if so which) implementation limit remains. If an (almost)
>> arbitrary number is fine, perhaps we'll want to have a Kconfig
>> option.
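
A rough sketch of what such a knob could look like (CONFIG_HVM_MAX_VCPUS
here is purely hypothetical, and since hvm_info_table.h is a public,
guest-visible header the real wiring would need more care than this):

    /* Hypothetical Kconfig-driven limit -- illustration only. */
    #ifdef CONFIG_HVM_MAX_VCPUS
    #define HVM_MAX_VCPUS CONFIG_HVM_MAX_VCPUS  /* chosen at build time */
    #else
    #define HVM_MAX_VCPUS 128                   /* today's default */
    #endif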
> 
> Do you think struct hvm_info_table would count as an implementation
> limit? To keep the struct within a single page, HVM_MAX_VCPUS has to
> stay below a certain value, roughly (PAGE_SIZE * 8). Supposing that is
> the only remaining implementation limit, I don't think it is reasonable
> to set HVM_MAX_VCPUS to that value, because we have no hardware to
> perform tests with; even Xeon Phi isn't capable of it. The value can be
> bumped once some method verifies that a guest can work with more vcpus.
> For now I prefer 288 over 512 or any other value.
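
For reference, a back-of-the-envelope sketch (not Xen code) of where that
bound comes from, assuming the current layout in which vcpu_online is a
one-bit-per-vcpu bitmap and the table starts at HVM_INFO_OFFSET (0x800)
within the info page:

    /* Rough estimate of the largest HVM_MAX_VCPUS the info page can hold. */
    #include <stdio.h>

    #define PAGE_SIZE        4096
    #define HVM_INFO_OFFSET  0x800   /* table begins half-way into the page */
    #define FIXED_FIELDS     (8 + 4 + 1 + 1 + 4)  /* signature, length,
                                                     checksum, apic_mode,
                                                     nr_vcpus */
    int main(void)
    {
        unsigned int room = PAGE_SIZE - HVM_INFO_OFFSET - FIXED_FIELDS;

        printf("bytes left for the vcpu_online bitmap: %u\n", room);
        printf("rough upper bound on HVM_MAX_VCPUS:    %u\n", room * 8);
        return 0;
    }

So the hard cap is on the order of sixteen thousand vcpus, i.e. below
PAGE_SIZE * 8 but far above anything testable today.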
> 
>>
>> I'm also curious - do Phis not come in multi-socket configs? It's
>> my understanding that 288 is the count for a single socket.
> 
> Currently there are none, but it's hard to say for future products.

Is there any reason to set HVM_MAX_VCPUS to a lower limit than
CONFIG_NR_CPUS? The latter can be set to 4095, so why not use the same
limit for HVM_MAX_VCPUS?


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

