
Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen



On Thu, Jun 20, 2019 at 1:39 AM Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
>
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of 
> > Juergen Gross
> > Sent: 20 June 2019 05:18
> > To: Christopher Clark <christopher.w.clark@xxxxxxxxx>; 
> > xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>; 
> > Konrad Rzeszutek Wilk
> > <konrad.wilk@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Andrew 
> > Cooper
> > <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Rich 
> > Persaud <persaur@xxxxxxxxx>;
> > Ankur Arora <ankur.a.arora@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien 
> > Grall
> > <julien.grall@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Daniel De Graaf 
> > <dgdegra@xxxxxxxxxxxxx>;
> > Christopher Clark <christopher.clark@xxxxxxxxxx>; Roger Pau Monne 
> > <roger.pau@xxxxxxxxxx>
> > Subject: Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface 
> > for PV drivers on nested Xen
> >
> > On 20.06.19 02:30, Christopher Clark wrote:
> > > This RFC patch series adds a new hypervisor interface to support running
> > > a set of PV frontend device drivers within dom0 of a guest Xen running
> > > on Xen.
> > >
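> > > To give a flavour of how a blanket frontend consumes this interface, here
> > > is a minimal, illustrative sketch (not taken from the series) of the L1
> > > dom0 kernel querying the host hypervisor's version through the nested
> > > path. The hypercall name nested_xen_version comes from the patch titles
> > > below; the __HYPERVISOR_ number, the _hypercall2 wrapper usage and the
> > > function names are assumptions made purely for illustration:
> > >
> > >     #include <linux/printk.h>
> > >     #include <asm/xen/hypercall.h>
> > >     #include <xen/interface/version.h>
> > >
> > >     /* Hypothetical wrapper: trap to the L1 Xen, which forwards to L0. */
> > >     static inline int HYPERVISOR_nested_xen_version(int cmd, void *arg)
> > >     {
> > >         return _hypercall2(int, nested_xen_version, cmd, arg);
> > >     }
> > >
> > >     static int blanket_check_host_xen(void)
> > >     {
> > >         /* XENVER_version packs (major << 16) | minor into the return. */
> > >         int ver = HYPERVISOR_nested_xen_version(XENVER_version, NULL);
> > >
> > >         if (ver < 0)
> > >             return ver;
> > >
> > >         pr_info("xenblanket: host Xen %d.%d\n", ver >> 16, ver & 0xffff);
> > >         return 0;
> > >     }
> > >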
> > > A practical deployment scenario is a system running PV guest VMs that use
> > > unmodified Xen PV device drivers, on a guest Xen hypervisor with a dom0
> > > using PV drivers itself, all within an HVM guest of a hosting Xen
> > > hypervisor (e.g. from a cloud provider). Multiple PV guest VMs can reside
> > > within a single cloud instance; guests can be live-migrated between
> > > cloud instances that run nested Xen, and virtual machine introspection
> > > of guests can be performed without requiring cloud provider support.
> > >
> > > The name "The Xen Blanket" was given by researchers from IBM and Cornell
> > > when the original work was published at the ACM EuroSys 2012 conference.
> > >      http://www1.unine.ch/eurosys2012/program/conference.html
> > >      https://dl.acm.org/citation.cfm?doid=2168836.2168849
> > > This patch series is a reimplementation of that architecture on modern Xen
> > > by Star Lab.
> > >
> > > A patch to the Linux kernel to add device drivers using this blanket 
> > > interface
> > > is at:
> > >      https://github.com/starlab-io/xenblanket-linux
> > > (This is an example, enabling operation and testing of a Xen Blanket 
> > > nested
> > > system. Further work would be necessary for Linux upstreaming.)
> > > Other relevant Linux work is currently in progress here:
> > >      https://lkml.org/lkml/2019/4/8/67
> > >      https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg00743.html
> > >
> > > thanks,
> > >
> > > Christopher
> > >
> > > Christopher Clark (9):
> > >    x86/guest: code movement to separate Xen detection from guest
> > >      functions
> > >    x86: Introduce Xen detection as separate logic from Xen Guest support.
> > >    x86/nested: add nested_xen_version hypercall
> > >    XSM: Add hook for nested xen version op; revises non-nested version op
> > >    x86/nested, xsm: add nested_memory_op hypercall
> > >    x86/nested, xsm: add nested_hvm_op hypercall
> > >    x86/nested, xsm: add nested_grant_table_op hypercall
> > >    x86/nested, xsm: add nested_event_channel_op hypercall
> > >    x86/nested, xsm: add nested_schedop_shutdown hypercall
> > >
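> > > As a rough illustration of the forwarding idea behind these hypercalls,
> > > an L1 handler for nested_xen_version could look roughly like the sketch
> > > below. The XSM hook name, the xen_hypercall_xen_version() helper and the
> > > exact signature are assumptions inferred from the patch titles, not the
> > > actual code in the series:
> > >
> > >     long do_nested_xen_version(unsigned int cmd,
> > >                                XEN_GUEST_HANDLE_PARAM(void) arg)
> > >     {
> > >         /* Let the XSM/FLASK policy decide whether this domain may
> > >            see host (L0) hypervisor state. */
> > >         long ret = xsm_nested_xen_version(XSM_PRIV, current->domain, cmd);
> > >
> > >         if ( ret )
> > >             return ret;
> > >
> > >         switch ( cmd )
> > >         {
> > >         case XENVER_version:
> > >             /* Re-issue the query to the host Xen via the L1 hypercall
> > >                page. */
> > >             return xen_hypercall_xen_version(XENVER_version, NULL);
> > >
> > >         default:
> > >             return -EOPNOTSUPP;
> > >         }
> > >     }
> > >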
> > >   tools/flask/policy/modules/dom0.te           |  14 +-
> > >   tools/flask/policy/modules/guest_features.te |   5 +-
> > >   tools/flask/policy/modules/xen.te            |   3 +
> > >   tools/flask/policy/policy/initial_sids       |   3 +
> > >   xen/arch/x86/Kconfig                         |  33 +-
> > >   xen/arch/x86/Makefile                        |   2 +-
> > >   xen/arch/x86/apic.c                          |   4 +-
> > >   xen/arch/x86/guest/Makefile                  |   4 +
> > >   xen/arch/x86/guest/hypercall_page.S          |   6 +
> > >   xen/arch/x86/guest/xen-guest.c               | 311 ++++++++++++++++
> > >   xen/arch/x86/guest/xen-nested.c              | 350 +++++++++++++++++++
> > >   xen/arch/x86/guest/xen.c                     | 264 +-------------
> > >   xen/arch/x86/hypercall.c                     |   8 +
> > >   xen/arch/x86/pv/hypercall.c                  |   8 +
> > >   xen/arch/x86/setup.c                         |   3 +
> > >   xen/include/asm-x86/guest/hypercall.h        |   7 +-
> > >   xen/include/asm-x86/guest/xen.h              |  36 +-
> > >   xen/include/public/xen.h                     |   6 +
> > >   xen/include/xen/hypercall.h                  |  33 ++
> > >   xen/include/xsm/dummy.h                      |  48 ++-
> > >   xen/include/xsm/xsm.h                        |  49 +++
> > >   xen/xsm/dummy.c                              |   8 +
> > >   xen/xsm/flask/hooks.c                        | 133 ++++++-
> > >   xen/xsm/flask/policy/access_vectors          |  26 ++
> > >   xen/xsm/flask/policy/initial_sids            |   1 +
> > >   xen/xsm/flask/policy/security_classes        |   1 +
> > >   26 files changed, 1086 insertions(+), 280 deletions(-)
> > >   create mode 100644 xen/arch/x86/guest/xen-guest.c
> > >   create mode 100644 xen/arch/x86/guest/xen-nested.c
> > >
> >
> > I think we should discuss that topic at the Xen developer summit in
> > Chicago. There suddenly seems to be a rush of activity in nested Xen
> > development and related areas, so syncing the efforts seems like a good idea.
>
> +1 from me on that...

Excellent -- thanks.

Christopher


>
>   Paul
>
> >
> > Juergen
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

