
Re: [Xen-devel] [PATCH RFC] Add SUPPORT.md



On 08/31/2017 12:25 PM, Roger Pau Monne wrote:
> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:
>> Add a machine-readable file to describe what features are in what
>> state of being 'supported', as well as information about how long this
>> release will be supported, and so on.
>>
>> The document should be formatted using "semantic newlines" [1], to make
>> changes easier.
>>
>> Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxx>
>> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>

Thanks for the thorough review!  Some responses...


>> +### x86/PV-on-HVM
> 
> Do we really consider this a guest type? From both Xen and the
> toolstack PoV this is just a HVM guest. What's more, I'm not really
> sure xl/libxl has the right options to create a HVM guest _without_
> exposing any PV interfaces.
> 
> Ie: can an HVM guest without PV timers and PV event channels
> actually be created? Or even without having the MSR to initialize the
> hypercall page.

This document has its origins in the "feature support" page.  "PVHVM" is
a collective term that was used at the time for exposing a number of
individual interfaces to the guest; I think a lot of that work happened
around the 4.2-4.3 timeframe.  And *one* of the goals, if I understand
correctly, is to allow the automatic generation of such a table from the
Xen sources.

It may be that we don't need to mention this as a separate feature
anymore; or it may be that we can categorize this differently somehow --
I'm open to suggestions here.
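
As an aside, since the whole point of the machine-readable fragments is
that tools can consume them, here's a rough sketch (mine, purely
illustrative -- not part of the patch) of how one might pull them out
of SUPPORT.md and flatten them into a table.  It assumes each "###"
section's fragment is the set of 4-space-indented "Key: value" lines,
as in the draft:

    import re, sys

    def parse_support(path):
        """Return {heading: [(key, value), ...]} for each '###' section."""
        entries, heading = {}, None
        with open(path) as f:
            for line in f:
                m = re.match(r'^###\s+(.*\S)', line)
                if m:
                    heading = m.group(1)
                    entries.setdefault(heading, [])
                elif heading and re.match(r'^    \S', line):
                    # Indented literal block line: "Key: value"
                    key, _, value = line.strip().partition(':')
                    entries[heading].append((key.strip(), value.strip()))
        return entries

    if __name__ == '__main__':
        for heading, fields in parse_support(sys.argv[1]).items():
            for key, value in fields:
                print('%-35s %-25s %s' % (heading, key, value))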

>> +    Status: Supported
>> +
>> +Fully virtualised guest using PV extensions/drivers for improved performance
>> +
>> +Requires hardware virtualisation support
>> +
>> +### x86/PVH guest
>> +
>> +    Status: Preview
>> +
>> +PVHv2 guest support
>> +
>> +Requires hardware virtualisation support
>> +
>> +### x86/PVH dom0
>               ^ v2
>> +
>> +    Status: Experimental
> 
> The status of this is just "not finished". We need at least the PCI
> emulation series for having a half-functional PVHv2 Dom0.

From the definition of 'Experimental':

    Functional completeness: No
    Functional stability: Here be dragons
    Interface stability: Not stable
    Security supported: No

"Not finished" -> Functional completeness: No -> Experimental.

If there's no way of doing anything with dom0 at all we should probably
just remove it from the list.

>> +PVHv2 domain 0 support
>> +
>> +### ARM guest
>> +
>> +    Status: Supported
>> +
>> +ARM only has one guest type at the moment
>> +
>> +## Limits/Host
>> +
>> +### CPUs
>> +
>> +    Limit, x86: 4095
>> +    Limit, ARM32: 8
>> +    Limit, ARM64: 128
>> +
>> +Note that for x86, very large number of cpus may not work/boot,
>> +but we will still provide security support
>> +
>> +### x86/RAM
>> +
>> +    Limit, x86: 16TiB
>> +    Limit, ARM32: 16GiB
>> +    Limit, ARM64: 5TiB
>> +
>> +[XXX: Andy to suggest what this should say for x86]
>> +
>> +## Limits/Guest
>> +
>> +### Virtual CPUs
>> +
>> +    Limit, x86 PV: 512
>> +    Limit, x86 HVM: 128
> 
> There has already been some discussion about the HVM vCPU limit due to
> other topics, is Xen really compromised on providing security support
> for this case?
> 
> I would very much like to have a host in osstest capable of creating
> such a guest, plus maybe some XTF tests to stress it.

This is just copied from our currently-advertised limits.  Feel free to
propose a different limit.  In fact, this seems like a good place to use
Limit-Security (which, as you point out below, is defined but not used in
the document as posted).
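
Just to illustrate what I mean (the Limit-Security figure below is
entirely made up, purely to show the format), such an entry might end
up looking something like:

    ### Virtual CPUs

        Limit, x86 HVM: 128
        Limit-Security, x86 HVM: 64 [hypothetical figure]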

>> +    Limit, ARM32: 8
>> +    Limit, ARM64: 128
>> +
>> +### x86/PV/Virtual RAM
>        ^ This seems wrong, "Guest RAM" maybe?

Oops -- Yeah, missed that one!

>> +
>> +    Limit, x86 PV: >1TB
> 
>  > 1TB? that seems kind of vague.

That's what I was given. :-)  Indeed, we need something more concrete --
I'll let someone who knows better propose something.

>> +    Limit, x86 HVM: 1TB
>> +    Limit, ARM32: 16GiB
>> +    Limit, ARM64: 1TB
>> +
>> +### x86 PV/Event Channels
>> +
>> +    Limit: 131072
>> +
>> +## Toolstack
>> +
>> +### xl
>> +
>> +    Status: Supported
>> +
>> +### Direct-boot kernel image format
>> +
>> +    Supported, x86: bzImage
> 
> This should be:
> 
> Supported, x86: bzImage, ELF
> 
> FreeBSD kernel is just a plain ELF binary that's loaded using
> libelf. It should also be suitable for ARM, but I have no idea whether
> it has been tested on ARM at all.

Ack

> 
>> +    Supported, ARM32: zImage
>> +    Supported, ARM64: Image [XXX - Not sure if this is correct]
>> +
>> +Format which the toolstack accept for direct-boot kernels
>> +
>> +### Qemu based disk backend (qdisk) for xl
>> +
>> +    Status: Supported
>> +
>> +### Open vSwitch integration for xl
>> +
>> +    Status: Supported
>> +
>> +### systemd support for xl
>> +
>> +    Status: Supported
>> +
>> +### JSON support for xl
>> +
>> +    Status: Preview
>> +
>> +### AHCI support for xl
>> +
>> +    Status, x86: Supported
>> +
>> +### ACPI guest
>> +
>> +    Status, ARM: Preview
>        Status: Supported
> 
> HVM guests have been using ACPI for a long time on x86.

You mean 'Status, x86 HVM: Supported', I take it?


>> +### Virtual cpu hotplug
>> +
>> +    Status, ARM: Supported
> 
> Status: Supported
> 
> On x86 is supported for both HVM and PV. HVM can use ACPI, PV uses
> xenstore.

Ack

>> +### Guest serial console
>> +
>> +    Status: Supported
>> +
>> +Logs key hypervisor and Dom0 kernel events to a file
> 
> What's "Guest serial console"? Is it xenconsoled? Does it log Dom0
> kernel events?

Oh -- sorry, I changed the title because I couldn't figure out what it
was supposed to mean, but apparently didn't read the description very
well.  But of course the description is bogus anyway -- host serial
consoles don't log things to a file.

Lars, what was originally meant here?

>> +### Transcendent Memory
>> +
>> +    Status: Experimental
> 
> Some text here might be nice, although I don't even know myself what's
> the purpose of tmem.

Konrad / Boris, do you want to add anything?

I could come up with a short description too.

>> +### Fair locks (ticket-locks)
>> +
>> +    Status: Supported
>> +
>> +[XXX Is this host ticket locks?  Or some sort of guest PV ticket locks?  If 
>> the former it doesn't make any sense to call it 'supported' because they're 
>> either there or not.]
> 
> Isn't that the spinlock implementation used by Xen internally? In any
> case, I don't think this should be on the list at all.

I was tidying up a list I got from Ian, who in turn got it from Lars.
Your interpretation (and your conclusion) seems best to me, but I wanted
to give them an opportunity to say otherwise.

>> +### Blkfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
> 
> Status, NetBSD: Supported, Security support external
> 
>> +    Status, Windows: Supported [XXX]
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
> 
> It feels kind of silly to list code that's not part of our project, I
> understand this is done because Linux lacks a security process and we
> are nice people, but IMHO this should be managed by the security team
> of each external project (or live with the fact that there's none).

Well the purpose of this document isn't *only* to say what's security
supported; it's also to help define new feature support, set
expectations for functionality, &c.

Additionally, regarding security:

1. For the most part our project wrote the Linux code, so it makes sense
for us to support it.

2. Windows is included as well, and that is explicitly a XenProject
subproject.

Maybe we should just have a section that points out that most code is
maintained by the projects that contain it, so we don't have to repeat it?

>> +### Netfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
> 
> Status, NetBSD: Supported, Security support external
> Status, OpenBSD: Supported, Security support external
> 
>> +    Status, Windows: Supported [XXX]
>> +
>> +Guest-side driver capable of speaking the Xen PV networking protocol
> 
> https://www.freebsd.org/security/
> http://www.netbsd.org/support/security/
> https://www.openbsd.org/security.html

Ack

>> +### Xen Framebuffer
>> +
>> +    Status, Linux (xen-fbfront): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
>> +
>> +[XXX FreeBSD? NetBSD?]
> 
> I don't think so.

Thanks

> 
>> +
>> +### Xen Console
>> +
>> +    Status, Linux (hvc_xen): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV console protocol
>> +
>> +[XXX FreeBSD? NetBSD? Windows?]
> 
> Status NetBSD, FreeBSD: Supported, Security support external
> 
> [...]
>> +Host-side implementation of the Xen PV framebuffer protocol
>> +
>> +### Xen Console
>> +
>> +    Status, Linux: Supported
> 
> There's no Linux host side (backend) of the PV console, it's
> xenconsoled. It should be:
> 
> Status: Supported
> 
> IMHO.

What you say makes sense, but I didn't pull the 'QEMU' thing out of
nowhere -- I'm pretty sure that was listed somewhere.  Let me see if I
can dig that out.

>> +    Status, QEMU: Supported
>> +
>> +Host-side implementation of the Xen PV console protocol
>> +
>> +### Xen PV keyboard
>> +
>> +    Status, Linux: Supported
> 
> Is there a Linux backend for this? I though the only backend was in
> QEMU.

Oh, I bet this is where I was getting confused.

>> +### Xen PV USB
>> +
>> +    Status, Linux: Experimental
>> +    Status, QEMU: Supported
> 
> Not sure about this either, do we consider both the PV backend and the
> QEMU emulation? Is the USB PV backend inside of Linux?

There are patches floating around for a Linux PVUSB backend that worked
at some point.

In the case of QEMU, I'm talking specifically about the PVUSB backend
that Juergen implemented (similar to the blkback instance in QEMU).
That was checked in some time ago and I'm pretty sure it's being actively
used by SuSE.

>> +### Xen PV TPM
>> +
>> +    Status, Linux: Supported
> 
> Again this backend runs in user-space IIRC, which means it's not Linux
> specific.

Ack

>> +### Online resize of virtual disks
>> +
>> +    Status: Supported
> 
> That pretty much depends on where you are actually storing your disks
> I guess. I'm not sure we want to make such compromises.

What do you mean?

>> +### Live Patching
>> +
>> +    Status: Supported, x86 only
> 
> Status, x86: Supported
> Status, ARM: Preview | Experimental?
> 
> Not sure which one is best.

Ah, missed this one, thanks.

>> +### Virtual Machine Introspection
>> +
>> +    Status: Supported, x86 only
> 
> Status, x86: Supported.

Ack

>> +### vTPM Support
>> +
>> +    Status: Supported, x86 only
> 
> How's that different from the "Xen PV TPM" item above?

Yeah, missed this duplication.  I'll remove this one.

>> +### Intel/TXT ???
>> +
>> +    Status: ???
>> +
>> +TXT-based integrity system for the Linux kernel and Xen hypervisor
>> +
>> +[XXX]
>> +
>> +## Hardware
>> +
>> +### x86/Nested Virtualization
>> +
>> +    Status: Experimental
> 
> Status, x86: Experimental.

Ack.

>> +
>> +Running a hypervisor inside an HVM guest
> 
> I would write that as: "Providing hardware virtualization extensions
> to HVM guests."

Good catch -- actually we should probably have a separate entry for
Nested PV (which works -- not sure whether we want to support it or not).

>> +### x86/HVM iPXE
>> +
>> +    Status: Supported, with caveats
>> +
>> +Booting a guest via PXE.
>> +PXE inherently places full trust of the guest in the network,
>> +and so should only be used
>> +when the guest network is under the same administrative control
>> +as the guest itself.
> 
> Hm, not sure why this needs to be spelled out, it's just like running
> any bootloader/firmware inside a HVM guest, which I'm quite sure we
> are not going to list here.
> 
> Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
> they run inside the guest, so if they are able to cause security
> issues, anything else is also capable of causing them.

Well iPXE is a feature, so we have to say something about it; and there
was a long discussion at the Summit about whether we should list iPXE as
"security supported", because *by design* it just runs random code that
someone sends it over the network.  But if we say it's not supported, it
makes it sound like we think you shouldn't use it.

Above was the agreed-upon compromise: to say it was supported but warn
people what "supported" means.

>> +### ARM/SMMU
>> +
>> +    Status: Supported, with caveats
>> +
>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
> 
> I'm not sure of the purpose of this sentence, it's quite clear that
> the SMMU is only supported if available. Also, I'm not sure this
> should be spelled out in this document, x86 doesn't have a VT-d or SVM
> section.

This sentence means, "An SMMU designed by ARM", as opposed to an SMMU
(or SMMU-like thing) designed by someone other than ARM.  (And yes, I
understand that such things existed before the ARM SMMU came out.)

I think people running ARM systems will understand what the sentence means.

>> +### ARM/ITS
>> +
>> +    Status: experimental
>> +
>> +[XXX What is this?]
>> +
>> +### ARM: 16K and 64K pages in guests
> 
> Newline

Ack

> 
>> +    Status: Supported, with caveats
>> +
>> +No support for QEMU backends in a 16K or 64K domain.
>> +
> 
> Extra newline.

Ack

>> +# Format and definitions
>> +
>> +This file contains prose, and machine-readable fragments.
>> +The data in a machine-readable fragment relate to
>> +the section and subection in which it is fine.
>                                          ^ belongs?

I think this should probably be 'found'.

>> +The file is in markdown format.
>> +The machine-readable fragments are markdown literals
>> +containing RFC-822-like (deb822-like) data.
>> +
>> +## Keys found in the Feature Support subsections
>> +
>> +### Status
>> +
>> +This gives the overall status of the feature,
>> +including security support status, functional completeness, etc.
>> +Refer to the detailed definitions below.
>> +
>> +If support differs based on implementation
>> +(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
>> +one line for each set of implementations will be listed.
>> +
>> +### Restrictions
>> +
>> +This is a summary of any restrictions which apply,
>> +particularly to functional or security support.
>> +
>> +Full details of restrictions may be provided in the prose
>> +section of the feature entry,
>> +if a Restrictions tag is present.
> 
> Formatting seems weird IMHO.

To quote the changelog:

"The document should be formatted using "semantic newlines" [1], to make
changes easier.

"[1] http://rhodesmill.org/brandon/2012/one-sentence-per-line/";

>> +### Limit-Security
>> +
>> +For size limits.
>> +This figure shows the largest configuration which will receive
>> +security support.
>> +This does not mean that such a configuration will actually work.
>> +This limit will only be listed explicitly
>> +if it is different than the theoretical limit.
> 
> There's no usage of this at all in the document I think.

There was, but all the "Limit-Security" options were the same as the
"Limit" options, so they all ended up being taken out.  I expect that at
least a handful will make their way into the final document.

Thanks!
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

