
Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md



On 11/09/17 18:01, George Dunlap wrote:
> +### x86/PV
> +
> +    Status: Supported
> +
> +Traditional Xen Project PV guest

What's a "Xen Project" PV guest?  Just Xen here.

Also, perhaps a statement of "No hardware requirements"?

> +### x86/RAM
> +
> +    Limit, x86: 16TiB
> +    Limit, ARM32: 16GiB
> +    Limit, ARM64: 5TiB
> +
> +[XXX: Andy to suggest what this should say for x86]

The limit for x86 is either 16TiB or 123TiB, depending on
CONFIG_BIGMEM.  CONFIG_BIGMEM is exposed via menuconfig without
XEN_CONFIG_EXPERT, so falls into at least some kind of support statement.
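
For reference, the knob lives in xen/arch/x86/Kconfig; from memory the
entry is roughly the below (wording approximate), so a plain
make -C xen menuconfig will offer it:

    config BIGMEM
            bool "big memory support"
            default n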

As for practical limits, I don't think it's reasonable to claim anything
which we can't test.  What are the specs in the MA colo?

> +
> +## Limits/Guest
> +
> +### Virtual CPUs
> +
> +    Limit, x86 PV: 512

Where did this number come from?  The actual limit as enforced in Xen is
8192, and it has been like that for a very long time (i.e. the 3.x days):

[root@fusebot ~]# python
Python 2.7.5 (default, Nov 20 2015, 02:00:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from xen.lowlevel.xc import xc as XC
>>> xc = XC()
>>> xc.domain_create()
1
>>> xc.domain_max_vcpus(1, 8192)
0
>>> xc.domain_create()
2
>>> xc.domain_max_vcpus(2, 8193)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
xen.lowlevel.xc.Error: (22, 'Invalid argument')

Trying to shut such a domain down, however, does tickle a host watchdog
timeout, as the for_each_vcpu() loops in domain_kill() are very long.
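
For completeness, the teardown which exercises those loops is just the
matching destroy call (a hypothetical continuation of the session above):

>>> xc.domain_destroy(1)   # stalls in domain_kill()'s for_each_vcpu() loops
0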

> +    Limit, x86 HVM: 128
> +    Limit, ARM32: 8
> +    Limit, ARM64: 128
> +
> +[XXX Andrew Cooper: Do want to add "Limit-Security" here for some of these?]

32 for each.  64-vcpu HVM guests can exert enough p2m lock pressure to
trigger a 5 second host watchdog timeout.
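
To put a number on it, staying within that limit would, hypothetically
continuing the session above, just be:

>>> xc.domain_max_vcpus(2, 32)
0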

> +
> +### Virtual RAM
> +
> +    Limit, x86 PV: >1TB
> +    Limit, x86 HVM: 1TB
> +    Limit, ARM32: 16GiB
> +    Limit, ARM64: 1TB

There is no specific upper bound on the size of PV or HVM guests that I
am aware of.  1.5TB HVM domains definitely work, because that's what we
test and support in XenServer.
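
For concreteness, 1.5TiB expressed in the MiB units which xl's memory=
parameter takes (assuming the guest is sized that way):

>>> 1536 * 1024
1572864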

> +
> +### x86 PV/Event Channels
> +
> +    Limit: 131072

Why do we call out event channel limits but not grant table limits?
Also, why is this x86?  The 2l and fifo ABIs are arch-agnostic, as far
as I am aware.
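
For context, the numbers fall out of the two ABIs themselves (from
memory):

>>> 32 * 32     # 2-level ABI, 32-bit guest
1024
>>> 64 * 64     # 2-level ABI, 64-bit guest
4096
>>> 1 << 17     # FIFO ABI, EVTCHN_FIFO_NR_CHANNELS
131072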

> +## High Availability and Fault Tolerance
> +
> +### Live Migration, Save & Restore
> +
> +    Status, x86: Supported

With caveats.  From docs/features/migration.pandoc:

* x86 HVM guest physmap operations (not reflected in logdirty bitmap)
* x86 HVM with PoD pages (attempts to map cause PoD allocations)
* x86 HVM with nested-virt (no relevant information included in the stream)
* x86 PV ballooning (P2M marked dirty, target frame not marked)
* x86 PV P2M structure changes (not noticed, stale mappings used) for
  guests not using the linear p2m layout

Also, features such as vNUMA and nested virt (which are two I know for
certain) have all state discarded on the source side, because they were
never suitably plumbed in.
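
For reference, the operations this section covers are, in xl terms:

  xl save <domid> <checkpoint-file>
  xl restore <checkpoint-file>
  xl migrate <domid> <target-host>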

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
