
Re: [Xen-devel] [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512



On Thu, Mar 01, 2018 at 12:37:57AM -0700, Jan Beulich wrote:
>>>> Chao Gao <chao.gao@xxxxxxxxx> 03/01/18 7:34 AM >>>
>>On Mon, Feb 26, 2018 at 09:10:33AM -0700, Jan Beulich wrote:
>>>Again - here we're talking about implementation limits, not
>>>bottlenecks. So in this context all I'm interested in is whether
>>>(and if so which) implementation limit remains. If an (almost)
>>>arbitrary number is fine, perhaps we'll want to have a Kconfig
>>>option.
>>
>>Do you think that struct hvm_info_table would be an implementation
>>limit? To keep this struct within a single page, HVM_MAX_VCPUS
>>would have to stay below a value like (PAGE_SIZE * 8). Supposing
>>it is the only implementation limit, I don't think it is reasonable
>>to set HVM_MAX_VCPUS to that value, because we don't have hardware to
>>perform tests with; even Xeon Phi isn't capable. This value can be
>>bumped once some method verifies that a guest can work with more vcpus.
>>For now I prefer 288 over 512 or other values.
>
>Whether going beyond PAGE_SIZE with the structure size is acceptable is
>worth thinking about, but I don't think there's an implied limit there. But -
>did you read my and George's subsequent reply at all? You continue to

Yes, I did, but somehow I didn't clearly understand the difference.
Sorry about that.

>mix up supported (because of being able to test) limits with implementation
>ones. Even Jürgen's suggestion to take NR_CPUS as the limit is not very
>reasonable - PV guests have an implementation limit of (iirc) 8192. Once
>again - if there's no sensible upper limit imposed by the implementation,
>consider introducing a Kconfig option to pick the limit.

Got it.

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
