
Re: [Xen-devel] [PATCH RFC] Add SUPPORT.md



On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
> Add a machine-readable file to describe what features are in what
> state of being 'supported', as well as information about how long this
> release will be supported, and so on.
> 
> The document should be formatted using "semantic newlines" [1], to make
> changes easier.
> 
> Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxx>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> 
> [1] http://rhodesmill.org/brandon/2012/one-sentence-per-line/
> ---
> 
> Definitely meant to be a draft; if you disagree with the status of one
> of these features, now is the time to suggest something else.
> 
> I've made a number of stylistic decisions that people may have opinions on:
> 
> * When dealing with multiple implementations of the same feature (for
>   instance, x86/PV x86/HVM and ARM guest types, or Linux / FreeBSD /
>   QEMU backends), I decided in general to combine the feature itself
>   into a single stanza, and break the 'Status' line up by specifying
>   the implementation.
> 
>   For example, if a feature is supported on x86 but tech preview on
>   ARM, there would be two status lines, thus:
> 
>     Status, x86: Supported
>     Status, ARM: Tech preview
> 
>   If a feature is not implemented for a specific implementation, it
>   will simply not be listed:
> 
>     Status, x86: Supported
> 
> * I've added common 'Support variations' to the bottom of the document
> 
> Thinking on support status of specific features:
> 
> gdbsx security support: Someone may want to debug an untrusted guest,
> so I think we should say 'yes' here.
> 
> xentrace: Users may want to trace guests in production environments,
> so I think we should say 'yes'.
> 
gcov: No good reason to run a gcov hypervisor in a production
environment.  There may be ways for a rogue guest to cause a DoS.
> 
> memory paging: Changed to experimental -- are we testing it at all?
> 
> alternative p2m: No security support until better testing in place
> 
> ARINC653 scheduler: Not sure we have the expertise to properly fix
> bugs.  Can switch to 'supported' if we get commitment from
> maintainers.
> 
vMCE: Is MCE an x86-only thing, or could this conceivably be extended
to ARM?
> 
> PVHv2: Not sure why we'd downgrade guest support to 'experimental'.
> 
> ARM/Virtual RAM: Not sure what the note 'Limited by supported host
> memory' was supposed to mean
> 
> CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Tim Deegan <tim@xxxxxxx>
> CC: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> CC: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>
> CC: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: Anthony Perard <anthony.perard@xxxxxxxxxx>
> CC: Paul Durrant <paul.durrant@xxxxxxxxxx>
> CC: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
> ---
>  SUPPORT.md | 770 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 770 insertions(+)
>  create mode 100644 SUPPORT.md
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> new file mode 100644
> index 0000000000..283cbeb725
> --- /dev/null
> +++ b/SUPPORT.md
> @@ -0,0 +1,770 @@
> +# Support statement for this release
> +
> +This document describes the support status and in particular the
> +security support status of the Xen branch within which you find it.
> +
> +See the bottom of the file for the definitions of the support status
> +levels etc.
> +
> +# Release Support
> +
> +    Xen-Version: 4.10-unstable
> +    Initial-Release: n/a
> +    Supported-Until: TBD
> +    Security-Support-Until: Unreleased - not yet security-supported
> +
> +# Feature Support
> +
> +## Host Architecture
> +
> +### x86-64
> +
> +    Status: Supported
> +
> +### ARM v7 + Virtualization Extensions
> +
> +    Status: Supported
> +
> +### ARM v8
> +
> +    Status: Supported
> +
> +## Guest Type
> +
> +### x86/PV
> +
> +    Status: Supported
> +
> +Traditional Xen Project PV guest
> +
> +### x86/HVM
> +
> +    Status: Supported
> +
> +Fully virtualised guest using hardware virtualisation extensions
> +
> +Requires hardware virtualisation support
> +
> +### x86/PV-on-HVM

Do we really consider this a guest type? From both the Xen and the
toolstack PoV this is just an HVM guest. What's more, I'm not really
sure xl/libxl has the right options to create an HVM guest _without_
exposing any PV interfaces.

Ie: can an HVM guest without PV timers and PV event channels actually
be created? Or even one without the MSR to initialize the hypercall
page.

> +
> +    Status: Supported
> +
> +Fully virtualised guest using PV extensions/drivers for improved performance
> +
> +Requires hardware virtualisation support
> +
> +### x86/PVH guest
> +
> +    Status: Preview
> +
> +PVHv2 guest support
> +
> +Requires hardware virtualisation support
> +
> +### x86/PVH dom0
              ^ v2
> +
> +    Status: Experimental

The status of this is just "not finished". We need at least the PCI
emulation series to have a half-functional PVHv2 Dom0.

> +
> +PVHv2 domain 0 support
> +
> +### ARM guest
> +
> +    Status: Supported
> +
> +ARM only has one guest type at the moment
> +
> +## Limits/Host
> +
> +### CPUs
> +
> +    Limit, x86: 4095
> +    Limit, ARM32: 8
> +    Limit, ARM64: 128
> +
> +Note that for x86, a very large number of CPUs may not work/boot,
> +but we will still provide security support
> +
> +### x86/RAM
> +
> +    Limit, x86: 16TiB
> +    Limit, ARM32: 16GiB
> +    Limit, ARM64: 5TiB
> +
> +[XXX: Andy to suggest what this should say for x86]
> +
> +## Limits/Guest
> +
> +### Virtual CPUs
> +
> +    Limit, x86 PV: 512
> +    Limit, x86 HVM: 128

There has already been some discussion about the HVM vCPU limit due to
other topics; is Xen really committed to providing security support
for this case?

I would very much like to have a host in osstest capable of creating
such a guest, plus maybe some XTF tests to stress it.

> +    Limit, ARM32: 8
> +    Limit, ARM64: 128
> +
> +### x86/PV/Virtual RAM
       ^ This seems wrong, "Guest RAM" maybe?
> +
> +    Limit, x86 PV: >1TB

"> 1TB"? That seems kind of vague.

> +    Limit, x86 HVM: 1TB
> +    Limit, ARM32: 16GiB
> +    Limit, ARM64: 1TB
> +
> +### x86 PV/Event Channels
> +
> +    Limit: 131072
> +
> +## Toolstack
> +
> +### xl
> +
> +    Status: Supported
> +
> +### Direct-boot kernel image format
> +
> +    Supported, x86: bzImage

This should be:

Supported, x86: bzImage, ELF

FreeBSD kernel is just a plain ELF binary that's loaded using
libelf. It should also be suitable for ARM, but I have no idea whether
it has been tested on ARM at all.
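
FWIW, whatever the image format, direct boot is just the kernel= path
in the domain config; a minimal sketch (the guest name and paths below
are made up):

```
# Hypothetical PV guest config using a direct-boot kernel image:
# a bzImage (or, as noted above, a plain ELF) on x86, zImage on ARM32.
name    = "directboot-demo"
kernel  = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest.img"
extra   = "root=/dev/xvda1 console=hvc0"
memory  = 1024
disk    = [ "phy:/dev/vg0/demo,xvda,w" ]
```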

> +    Supported, ARM32: zImage
> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
> +
> +Formats which the toolstack accepts for direct-boot kernels
> +
> +### Qemu based disk backend (qdisk) for xl
> +
> +    Status: Supported
> +
> +### Open vSwitch integration for xl
> +
> +    Status: Supported
> +
> +### systemd support for xl
> +
> +    Status: Supported
> +
> +### JSON support for xl
> +
> +    Status: Preview
> +
> +### AHCI support for xl
> +
> +    Status, x86: Supported
> +
> +### ACPI guest
> +
> +    Status, ARM: Preview
       Status: Supported

HVM guests have been using ACPI for a long time on x86.

> +
> +### PVUSB support for xl
> +
> +    Status: Supported
> +
> +### HVM USB passthrough for xl
> +
> +    Status, x86: Supported
> +
> +### QEMU backend hotplugging for xl
> +
> +    Status: Supported
> +
> +### Soft-reset for xl
> +
> +    Status: Supported
> +
> +### Virtual cpu hotplug
> +
> +    Status, ARM: Supported

Status: Supported

On x86 it is supported for both HVM and PV: HVM can use ACPI, PV uses
xenstore.
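
As a sketch of how that looks from xl (guest name and counts made up):

```
# Hypothetical domain config: boot with 2 vCPUs, allow hotplug up to 8.
name     = "vcpu-demo"
vcpus    = 2
maxvcpus = 8
```

followed at runtime by something like `xl vcpu-set vcpu-demo 8`.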

> +
> +## Toolstack/3rd party
> +
> +### libvirt driver for xl
> +
> +    Status: Supported, Security support external
> +
> +Security support for libvirt is provided by the libvirt project.
> +See https://libvirt.org/securityprocess.html
> +
> +## Tooling
> +
> +### gdbsx
> +
> +    Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### vPMU
> +
> +    Status, x86: Supported, Not security supported
> +
> +Virtual Performance Management Unit for HVM guests
> +
> +Disabled by default (enable with hypervisor command line option).
> +This feature is not security supported:
> +see http://xenbits.xen.org/xsa/advisory-163.html
> +
> +### Guest serial console
> +
> +    Status: Supported
> +
> +Logs key hypervisor and Dom0 kernel events to a file

What's "Guest serial console"? Is it xenconsoled? Does it log Dom0
kernel events?

> +
> +### xentrace
> +
> +    Status, x86: Supported
> +
> +Tool to capture Xen trace buffer data
> +
> +### gcov
> +
> +    Status: Supported, Not security supported
> +
> +## Memory Management
> +
> +### Memory Ballooning
> +
> +    Status: Supported
> +
> +### Memory Sharing
> +
> +    Status, x86 HVM: Preview
> +    Status, ARM: Preview
> +
> +Allow sharing of identical pages between guests
> +
> +### Memory Paging
> +
> +    Status, x86 HVM: Experimental
> +
> +Allow pages belonging to guests to be paged to disk
> +
> +### Transcendent Memory
> +
> +    Status: Experimental

Some text here might be nice, although I don't even know myself what
the purpose of tmem is.

[...]
> +### x86/Deliver events to PVHVM guests using Xen event channels
> +
> +    Status: Supported

I'm not really sure of the usefulness of this item. As said above, I
don't think it's possible to create an HVM guest without event
channels, in which case this should already be covered by the HVM
guest type support.

> +
> +### Fair locks (ticket-locks)
> +
> +    Status: Supported
> +
> +[XXX Is this host ticket locks?  Or some sort of guest PV ticket locks?
> +If the former it doesn't make any sense to call it 'supported'
> +because they're either there or not.]

Isn't that the spinlock implementation used by Xen internally? In any
case, I don't think this should be on the list at all.

> +
> +## High Availability and Fault Tolerance
> +
> +### Live Migration, Save & Restore
> +
> +    Status, x86: Supported
> +
> +### Remus Fault Tolerance
> +
> +    Status: Experimental
> +
> +### COLO Manager
> +
> +    Status: Experimental
> +
> +### vMCE
> +
> +    Status, x86: Supported
> +
> +Forward Machine Check Exceptions to Appropriate guests
> +
> +## Virtual driver support, guest side
> +
> +### Blkfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external

Status, NetBSD: Supported, Security support external

> +    Status, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV block protocol

It feels kind of silly to list code that's not part of our project; I
understand this is done because Linux lacks a security process and we
are nice people, but IMHO this should be managed by the security team
of each external project (or we should live with the fact that there's
none).

> +### Netfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external

Status, NetBSD: Supported, Security support external
Status, OpenBSD: Supported, Security support external

> +    Status, Windows: Supported [XXX]
> +
> +Guest-side driver capable of speaking the Xen PV networking protocol

https://www.freebsd.org/security/
http://www.netbsd.org/support/security/
https://www.openbsd.org/security.html

> +
> +### Xen Framebuffer
> +
> +    Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +[XXX FreeBSD? NetBSD?]

I don't think so.

> +
> +### Xen Console
> +
> +    Status, Linux (hvc_xen): Supported
> +
> +Guest-side driver capable of speaking the Xen PV console protocol
> +
> +[XXX FreeBSD? NetBSD? Windows?]

Status NetBSD, FreeBSD: Supported, Security support external

[...]
> +Host-side implementation of the Xen PV framebuffer protocol
> +
> +### Xen Console
> +
> +    Status, Linux: Supported

There's no Linux host side (backend) of the PV console; it's
xenconsoled. It should be:

Status: Supported

IMHO.

> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV console protocol
> +
> +### Xen PV keyboard
> +
> +    Status, Linux: Supported

Is there a Linux backend for this? I thought the only backend was in
QEMU.

> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV keyboard protocol
> +
> +### Xen PV USB
> +
> +    Status, Linux: Experimental
> +    Status, QEMU: Supported

Not sure about this either; do we consider both the PV backend and the
QEMU emulation? Is the USB PV backend inside of Linux?

> +
> +Host-side implementation of the Xen PV USB protocol
> +
> +### Xen PV SCSI protocol
> +
> +    Status, Linux: [XXX]
> +
> +### Xen PV TPM
> +
> +    Status, Linux: Supported

Again, this backend runs in user-space IIRC, which means it's not
Linux-specific.

> +
> +### Xen 9pfs
> +
> +    Status, QEMU: Preview
> +
> +### PVCalls
> +
> +    Status, Linux: Preview
> +
> +### Online resize of virtual disks
> +
> +    Status: Supported

That pretty much depends on where you are actually storing your disks,
I guess. I'm not sure we want to make such promises.

> +
> +## Security
> +
> +### Driver Domains
> +
> +    Status: Supported
> +
> +### Device Model Stub Domains
> +
> +    Status: Supported, with caveats
> +
> +Vulnerabilities of a device model stub domain to a hostile driver domain
> +are excluded from security support.
> +
> +### KCONFIG Expert
> +
> +    Status: Experimental
> +
> +### Live Patching
> +
> +    Status: Supported, x86 only

Status, x86: Supported
Status, ARM: Preview | Experimental?

Not sure which one is best.

> +
> +Compile time disabled
> +
> +### Virtual Machine Introspection
> +
> +    Status: Supported, x86 only

Status, x86: Supported.

> +
> +### XSM & FLASK
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### XSM & FLASK support for IS_PRIV
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### vTPM Support
> +
> +    Status: Supported, x86 only

How's that different from the "Xen PV TPM" item above?

> +
> +### Intel/TXT ???
> +
> +    Status: ???
> +
> +TXT-based integrity system for the Linux kernel and Xen hypervisor
> +
> +[XXX]
> +
> +## Hardware
> +
> +### x86/Nested Virtualization
> +
> +    Status: Experimental

Status, x86: Experimental.

> +
> +Running a hypervisor inside an HVM guest

I would write that as: "Providing hardware virtualization extensions
to HVM guests."

> +
> +### x86/HVM iPXE
> +
> +    Status: Supported, with caveats
> +
> +Booting a guest via PXE.
> +PXE inherently places full trust of the guest in the network,
> +and so should only be used
> +when the guest network is under the same administrative control
> +as the guest itself.

Hm, not sure why this needs to be spelled out; it's just like running
any bootloader/firmware inside an HVM guest, which I'm quite sure we
are not going to list here.

Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
they run inside the guest, so if they are able to cause security
issues, anything else is also capable of causing them.

> +
> +### x86/Physical CPU Hotplug
> +
> +    Status: Supported
> +
> +### x86/Physical Memory Hotplug
> +
> +    Status: Supported
> +
> +### x86/PCI Passthrough PV
> +
> +    Status: Supported, Not security supported
> +
> +PV passthrough cannot be done safely.
> +
> +[XXX Not even with an IOMMU?]
> +
> +### x86/PCI Passthrough HVM
> +
> +    Status: Supported, with caveats
> +
> +Many hardware device and motherboard combinations
> +are not possible to use safely.
> +The XenProject will support bugs in PCI passthrough for Xen,
> +but the user is responsible to ensure that the hardware combination they use
> +is sufficiently secure for their needs,
> +and should assume that any combination is insecure
> +unless they have reason to believe otherwise.
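
For reference, assigning a device in xl is just (the BDF is made up):

```
# Hypothetical HVM guest config passing through one PCI device.
pci = [ "0000:03:00.0" ]
```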
> +
> +### ARM/Non-PCI device passthrough
> +
> +    Status: Supported

I guess non-pci devices on ARM also use the IOMMU? (SMMU)

> +
> +### x86/Advanced Vector eXtension
> +
> +    Status: Supported
> +
> +### Intel Platform QoS Technologies
> +
> +    Status: Preview
> +
> +### ARM/ACPI (host)
> +
> +    Status: Experimental
> +
> +### ARM/SMMU
> +
> +    Status: Supported, with caveats
> +
> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.

I'm not sure of the purpose of this sentence; it's quite clear that
the SMMU is only supported if available. Also, I'm not sure this
should be spelled out in this document: x86 doesn't have a VT-d or SVM
section.

> +
> +### ARM/ITS
> +
> +    Status: Experimental
> +
> +[XXX What is this?]
> +
> +### ARM: 16K and 64K pages in guests

Newline

> +    Status: Supported, with caveats
> +
> +No support for QEMU backends in a 16K or 64K domain.
> +

Extra newline.

> +
> +# Format and definitions
> +
> +This file contains prose, and machine-readable fragments.
> +The data in a machine-readable fragment relate to
> +the section and subsection in which it is fine.
                                             ^ belongs?

> +
> +The file is in markdown format.
> +The machine-readable fragments are markdown literals
> +containing RFC-822-like (deb822-like) data.
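
Not part of the patch, but to illustrate the intended
machine-readability, a minimal sketch of a parser for such fragments
(the function name and the dict shape are my own invention):

```python
def parse_fragment(text):
    """Parse an RFC-822-like fragment into a dict.

    Plain keys ('Limit: 131072') map key -> value; qualified keys
    ('Status, x86: Supported') map (key, qualifier) -> value.
    """
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Split on the first colon only, so values may contain colons.
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if "," in key:
            base, qual = (part.strip() for part in key.split(",", 1))
            entries[(base, qual)] = value
        else:
            entries[key] = value
    return entries

print(parse_fragment("    Status, x86: Supported\n    Status, ARM: Tech preview"))
# {('Status', 'x86'): 'Supported', ('Status', 'ARM'): 'Tech preview'}
```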
> +
> +## Keys found in the Feature Support subsections
> +
> +### Status
> +
> +This gives the overall status of the feature,
> +including security support status, functional completeness, etc.
> +Refer to the detailed definitions below.
> +
> +If support differs based on implementation
> +(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
> +one line for each set of implementations will be listed.
> +
> +### Restrictions
> +
> +This is a summary of any restrictions which apply,
> +particularly to functional or security support.
> +
> +Full details of restrictions may be provided in the prose
> +section of the feature entry,
> +if a Restrictions tag is present.

Formatting seems weird IMHO.

> +
> +### Limit-Security
> +
> +For size limits.
> +This figure shows the largest configuration which will receive
> +security support.
> +This does not mean that such a configuration will actually work.
> +This limit will only be listed explicitly
> +if it is different than the theoretical limit.

There's no usage of this at all in the document I think.

> +
> +### Limit
> +
> +This figure shows a theoretical size limit.
> +This does not mean that such a large configuration will actually work.

That doesn't make us look especially good, but anyway.

[...]
> +### Security supported
> +
> +Will XSAs be issued if security-related bugs are discovered
> +in the functionality?
> +
> +If "no",
> +anyone who finds a security-related bug in the feature
> +will be advised to
> +post it publicly to the Xen Project mailing lists
> +(or contact another security response team,
> +if a relevant one exists).
> +
> +Bugs found after the end of **Security-Support-Until**
> +in the Release Support section will receive an XSA
> +if they also affect newer, security-supported, versions of Xen.
> +However,
> +the Xen Project will not provide official fixes
> +for non-security-supported versions.

Again weird formatting above (also elsewhere).

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

