
Xen Rust VirtIO demos work breakdown for Project Stratos



Hi,

The following is a breakdown (as best I can figure) of the work needed
to demonstrate VirtIO backends in Rust on the Xen hypervisor. It
requires work across a number of projects, most notably core Rust and
VirtIO enabling in the Xen project (building on the work EPAM has
already done), and the start of enabling the rust-vmm crates to work
with Xen.

The first demo is a fairly simple toy to exercise the direct hypercall
approach for a unikernel backend. On its own it isn't super impressive
but hopefully serves as a proof of concept for the idea of having
backends running in a single exception level where latency will be
important.

The second is a much more ambitious bridge between Xen and vhost-user
to allow for re-use of the existing vhost-user backends, with the
bridge acting as a proxy for what would usually be a full VMM in the
type-2 hypervisor case. With that in mind, the rust-vmm work is only
aimed at doing the device emulation and doesn't address the larger
question of how type-1 hypervisors can be integrated into the rust-vmm
hypervisor model.

A quick note about the estimates: they are exceedingly rough guesses
plucked out of the air and I would be grateful for feedback from the
appropriate domain experts on whether I'm being overly optimistic or
pessimistic.

The links to the Stratos JIRA should be at least read-accessible to
all, although they contain the same information as the attached
document (albeit with nicer PNG renderings of my ASCII art ;-). There
is a Stratos sync-up call next Thursday:

  
https://calendar.google.com/event?action=TEMPLATE&tmeid=MWpidm5lbzM5NjlydnAxdWxvc2s4aGI0ZGpfMjAyMTA5MzBUMTUwMDAwWiBjX2o3bmdpMW84cmxvZmtwZWQ0cjVjaDk4bXZnQGc&tmsrc=c_j7ngi1o8rlofkped4r5ch98mvg%40group.calendar.google.com

and I'm sure there will also be discussion in the various projects
(hence the wide CC list). The Stratos calls are open to anyone who wants
to attend and we welcome feedback from all who are interested.

So on with the work breakdown:

                    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                     STRATOS PLANNING FOR 21 TO 22

                              Alex Bennée
                    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━


Table of Contents
─────────────────

1. Xen Rust Bindings ([STR-51])
.. 1. Upstream an "official" rust crate for Xen ([STR-52])
.. 2. Basic Hypervisor Interactions hypercalls ([STR-53])
.. 3. [#10] Access to XenStore service ([STR-54])
.. 4. VirtIO support hypercalls ([STR-55])
2. Xen Hypervisor Support for Stratos ([STR-56])
.. 1. Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
.. 2. Tweaks to tooling to launch VirtIO guests
3. rust-vmm support for Xen VirtIO ([STR-59])
.. 1. Make vm-memory Xen aware ([STR-60])
.. 2. Xen IO notification and IRQ injections ([STR-61])
4. Stratos Demos
.. 1. Rust based stubdomain monitor ([STR-62])
.. 2. Xen aware vhost-user master ([STR-63])





1 Xen Rust Bindings ([STR-51])
══════════════════════════════

  There exists a [placeholder repository] with the start of a set of
  x86_64 bindings for Xen and a very basic hello world unikernel
  example. This forms the basis of the initial Xen Rust work and will
  be available as a [xen-sys crate] via cargo.


[STR-51] <https://linaro.atlassian.net/browse/STR-51>

[placeholder repository] <https://gitlab.com/cardoe/oxerun.git>

[xen-sys crate] <https://crates.io/crates/xen-sys>

1.1 Upstream an "official" rust crate for Xen ([STR-52])
────────────────────────────────────────────────────────

  To start with we will want an upstream location for future work to
  be based upon. The intention is that the crate is independent of the
  version of Xen it runs on (above a chosen baseline version). This
  will entail:

  • ☐ agreeing with upstream the name/location for the source
  • ☐ documenting the rules for the "stable" hypercall ABI
  • ☐ establishing an internal interface to abstract over
    ioctl-mediated and direct hypercalls
  • ☐ ensuring the crate is multi-arch and has feature parity for arm64

  As such we expect the implementation to be standalone, i.e. not
  wrapping the existing Xen libraries for mediation. There should be a
  close (1-to-1) mapping between the interfaces in the crate and the
  eventual hypercall made to the hypervisor.
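
  As a sketch of what that internal mediation interface could look
  like (all names here are hypothetical, nothing is settled API):

#+begin_src rust
/// Placeholder error type: Xen hypercalls return negative errno
/// values on failure.
pub struct XenError(pub i64);

/// Hypothetical sketch of the mediation interface: the public
/// hypercall wrappers are written once against this trait and the
/// crate selects an implementation at build time.
pub trait HypercallTransport {
    /// Issue hypercall number `op` with the given arguments and
    /// return the hypervisor's result code.
    fn call(&self, op: u64, args: &[u64]) -> Result<u64, XenError>;
}

/// dom0/domU userspace: mediate via the privcmd ioctl device.
pub struct IoctlTransport { /* holds an fd for /dev/xen/privcmd */ }

/// Unikernel: issue the hypercall instruction directly
/// (hvc #0xEA1 on arm64).
pub struct DirectTransport;
#+end_src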

  Estimate: 4w (elapsed likely longer due to discussion)


[STR-52] <https://linaro.atlassian.net/browse/STR-52>


1.2 Basic Hypervisor Interactions hypercalls ([STR-53])
───────────────────────────────────────────────────────

  These are the bare minimum hypercalls implemented as both ioctl and
  direct calls. These allow for a very basic binary to:

  • ☐ console_io - output IO via the Xen console
  • ☐ domctl stub - basic stub for domain control (different API?)
  • ☐ sysctl stub - basic stub for system control (different API?)

  The idea would be that this provides enough hypercall interface to
  query the list of domains and output their status via the Xen
  console. There is an open question as to whether the domctl and
  sysctl hypercalls are the way to go.
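
  For illustration, the console_io path from a unikernel might look
  something like this (the hypercall numbers are from
  xen/include/public/xen.h; hypercall3 is a hypothetical
  arch-specific shim):

#+begin_src rust
// Minimal sketch: print to the Xen console via console_io.
const HYPERVISOR_CONSOLE_IO: u64 = 18; // __HYPERVISOR_console_io
const CONSOLEIO_WRITE: u64 = 0;        // CONSOLEIO_write

extern "C" {
    // Provided by the crate's arch-specific assembly shim
    // (hypothetical, e.g. an hvc #0xEA1 wrapper on arm64).
    fn hypercall3(op: u64, a1: u64, a2: u64, a3: u64) -> i64;
}

pub fn console_write(msg: &str) -> i64 {
    unsafe {
        hypercall3(HYPERVISOR_CONSOLE_IO,
                   CONSOLEIO_WRITE,
                   msg.len() as u64,
                   msg.as_ptr() as u64)
    }
}
#+end_src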

  Estimate: 6w


[STR-53] <https://linaro.atlassian.net/browse/STR-53>


1.3 [#10] Access to XenStore service ([STR-54])
───────────────────────────────────────────────

  This is a shared configuration storage space accessed either via
  Unix sockets (on dom0) or via the Xenbus. It is used to access
  configuration information for the domain.

  Is this needed for a backend though? Can everything just be passed
  directly on the command line?
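
  If it is needed, the API shape could be quite small; a hypothetical
  sketch (the path follows the standard XenStore layout, with the
  domid and device values purely illustrative):

#+begin_src rust
use std::io;

/// Hypothetical handle onto XenStore, backed by either the Unix
/// socket (on dom0) or /dev/xen/xenbus.
pub struct XenStore;

impl XenStore {
    pub fn read(&self, path: &str) -> io::Result<String> {
        // Would speak the xenbus wire protocol here; elided in
        // this sketch.
        let _ = path;
        unimplemented!()
    }
}

// e.g. store.read("/local/domain/1/device/vbd/51712/params")
#+end_src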

  Estimate: 4w


[STR-54] <https://linaro.atlassian.net/browse/STR-54>


1.4 VirtIO support hypercalls ([STR-55])
────────────────────────────────────────

  These are the hypercalls that need to be implemented to support a
  VirtIO backend. This includes the ability to map another guest's
  memory into the current domain's address space, register to receive
  IOREQ events when the guest knocks at the doorbell, and inject kicks
  into the guest. The hypercalls we need to support would be:

  • ☐ dmop - device model ops (*_ioreq_server, setirq, nr_vcpus)
  • ☐ foreignmemory - map and unmap guest memory

  The DMOP space is larger than what we need for an IOREQ backend, so
  I've based it on just what arch/arm/dm.c exports, which is the
  subset introduced for EPAM's VirtIO work.
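
  Put together, a backend's use of these two hypercall families would
  look roughly like this (all names are hypothetical stand-ins for the
  eventual dmop/foreignmemory wrappers):

#+begin_src rust
// Rough sketch of the IOREQ backend lifecycle.
fn run_backend(domid: u16, virq: u32) -> Result<(), XenError> {
    // Register as the device model for this domain (*_ioreq_server).
    let server = dmop::create_ioreq_server(domid)?;
    // Map the shared IOREQ page from the guest (foreignmemory).
    let shared = foreignmemory::map(domid, server.ioreq_gfn(), 1)?;
    loop {
        // Block until the guest knocks at the doorbell.
        let req = server.wait_for_ioreq(&shared)?;
        // Emulate the access (e.g. a virtio-mmio register read/write).
        let resp = emulate_mmio(&req);
        server.complete(&req, resp)?;
        // Kick the guest back (dmop setirq).
        dmop::set_irq_level(domid, virq, 1)?;
    }
}
#+end_src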

  Estimate: 12w


[STR-55] <https://linaro.atlassian.net/browse/STR-55>


2 Xen Hypervisor Support for Stratos ([STR-56])
═══════════════════════════════════════════════

  These cover the work needed to support the various different
  deployments of Stratos components on Xen.


[STR-56] <https://linaro.atlassian.net/browse/STR-56>

2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
───────────────────────────────────────────────────────────────

  Currently the foreign memory mapping support only works for dom0 due
  to reference counting issues. If we are to support backends running
  in their own domains this will need to be fixed.

  Estimate: 8w


[STR-57] <https://linaro.atlassian.net/browse/STR-57>


2.2 Tweaks to tooling to launch VirtIO guests
─────────────────────────────────────────────

  There might not be too much to do here. The EPAM work already did
  something similar for their virtio-block PoC. Essentially we need to
  ensure:
  • ☐ DT bindings are passed to the guest for virtio-mmio device
    discovery
  • ☐ our Rust backend can be instantiated before the domU is launched

  This currently assumes the tools and the backend are running in dom0.

  Estimate: 4w


3 rust-vmm support for Xen VirtIO ([STR-59])
════════════════════════════════════════════

  This encompasses the tasks required to get a vhost-user server up and
  running while interfacing to the Xen hypervisor. This will require
  the xen-sys crate for the actual interface to the hypervisor.

  We need to work out how a Xen configuration option would be passed to
  the various bits of rust-vmm when something is being built.
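
  One plausible mechanism is a cargo feature gating the Xen code
  paths, along these lines (the feature name and module layout are
  hypothetical):

#+begin_src rust
// In a rust-vmm crate's lib.rs: only compile the Xen-specific
// module when built with `--features xen`.
#[cfg(feature = "xen")]
mod xen;

#[cfg(feature = "xen")]
pub use crate::xen::GuestMemoryXen;

#[cfg(not(feature = "xen"))]
pub use crate::mmap::GuestMemoryMmap;
#+end_src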


[STR-59] <https://linaro.atlassian.net/browse/STR-59>

3.1 Make vm-memory Xen aware ([STR-60])
───────────────────────────────────────

  The vm-memory crate is the root crate for abstracting access to the
  guest's memory. It currently has multiple configuration builds to
  handle the differences between mmap on Windows and Unix. Although
  mmap isn't directly exposed, the public interfaces support an
  mmap-like interface. We would need to:

  • ☐ work out how to expose foreign memory via the vm-memory mechanism

  I'm not sure if this just means implementing the GuestMemory trait
  for a GuestMemoryXen or if we need to present an mmap-like interface.
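
  Either way, the starting point is likely a safe wrapper around a
  foreign mapping that behaves like the mmap-backed memory the
  existing consumers expect; a speculative sketch (names hypothetical):

#+begin_src rust
use std::io;

// Speculative sketch: a foreignmemory mapping that could back a
// vm-memory style guest region.
pub struct ForeignMapping {
    ptr: *mut u8, // VA returned by the privcmd mapping ioctl
    len: usize,   // size in bytes (whole pages)
}

impl ForeignMapping {
    /// Map `count` pages of domain `domid` starting at frame `gfn`.
    pub fn new(_domid: u16, _gfn: u64, _count: usize) -> io::Result<Self> {
        // Would issue IOCTL_PRIVCMD_MMAPBATCH_V2 against
        // /dev/xen/privcmd here; elided in this sketch.
        unimplemented!()
    }

    /// Expose the mapping as a byte slice, mirroring what the
    /// mmap-based backends get today.
    pub fn as_slice(&self) -> &[u8] {
        unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
    }
}
#+end_src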

  Estimate: 8w


[STR-60] <https://linaro.atlassian.net/browse/STR-60>


3.2 Xen IO notification and IRQ injections ([STR-61])
─────────────────────────────────────────────────────

  The KVM world provides ioeventfd (notifications) and irqfd
  (injection) to signal asynchronously between the guest and the
  backend. As far as I can tell this is currently handled inside the
  various VMMs, which assume a KVM backend.

  While the vhost-user slave code doesn't see the
  register_ioevent/register_irqfd events, it does deal with EventFDs
  throughout the code. Perhaps the best approach here would be to
  create an IOREQ crate that can create EventFD descriptors which can
  then be passed to the slaves to use for notification and injection.

  Alternatively there might be an argument for a new crate that can
  encapsulate this behaviour for both KVM/ioeventfd and Xen/IOREQ
  setups?
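
  Whichever shape it takes, the surface presented to the slave could
  stay EventFD based; a speculative sketch of such a crate (all names
  hypothetical):

#+begin_src rust
use vmm_sys_util::eventfd::EventFd;

// Speculative surface for the proposed IOREQ crate: it owns the Xen
// IOREQ server and hands plain EventFds to the vhost-user slave, so
// the existing slave code needs no Xen knowledge.
pub struct XenIoreqBridge {
    kick: EventFd, // signalled when the guest knocks a doorbell
    call: EventFd, // signalled by the slave to request an IRQ injection
}

impl XenIoreqBridge {
    /// EventFd the slave polls for guest notifications (ioeventfd role).
    pub fn ioevent_fd(&self) -> &EventFd { &self.kick }
    /// EventFd the slave writes to inject a guest interrupt (irqfd role).
    pub fn irq_fd(&self) -> &EventFd { &self.call }
}
#+end_src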

  Estimate: 8w?


[STR-61] <https://linaro.atlassian.net/browse/STR-61>


4 Stratos Demos
═══════════════

  These tasks cover the creation of demos that bring together all the
  previous bits of work to demonstrate a new area of capability that
  has been opened up by Stratos work.


4.1 Rust based stubdomain monitor ([STR-62])
────────────────────────────────────────────

  This is a basic demo that is a proof of concept for a unikernel-style
  backend written in pure Rust. This work would be a useful precursor
  for things such as the RTOS Dom0 on a safety island ([STR-11]) or as
  a carrier for the virtio-scmi backend.

  The monitor program will periodically poll the state of the other
  domains and echo their status to the Xen console.
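
  Assuming the STR-53 hypercalls are in place, the core of the monitor
  is little more than the following loop (the xen_sys wrapper names
  are hypothetical):

#+begin_src rust
// Sketch of the monitor loop, built on the sysctl and console_io
// hypercalls from STR-53.
fn monitor() -> ! {
    loop {
        for dom in xen_sys::sysctl::domain_list().unwrap() {
            xen_sys::console::write(&format!("dom{}: {:?}\n",
                                             dom.domid, dom.state));
        }
        xen_sys::time::sleep(core::time::Duration::from_secs(1));
    }
}
#+end_src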

  Estimate: 4w

#+name: stub-domain-example
#+begin_src ditaa :cmdline -o :file stub_domain_example.png
                      Dom0                      |        DomU       |      DomStub
                                                |                   |
                                                :  /-------------\  :
                                                |  |cPNK         |  |
                                                |  |             |  |
                                                |  |             |  |
        /------------------------------------\  |  |   GuestOS   |  |
        |cPNK                                |  |  |             |  |
  EL0   |   Dom0 Userspace (xl tools, QEMU)  |  |  |             |  |  /---------------\
        |                                    |  |  |             |  |  |cYEL           |
        \------------------------------------/  |  |             |  |  |               |
        +------------------------------------+  |  |             |  |  | Rust Monitor  |
  EL1   |cA1B        Dom0 Kernel             |  |  |             |  |  |               |
        +------------------------------------+  |  \-------------/  |  \---------------/

-------------------------------------------------------------------------------=------------------

        +-------------------------------------------------------------------------------------+
  EL2   |cC02                              Xen Hypervisor                                      |
        +-------------------------------------------------------------------------------------+
#+end_src

[STR-62] <https://linaro.atlassian.net/browse/STR-62>

[STR-11] <https://linaro.atlassian.net/browse/STR-11>


4.2 Xen aware vhost-user master ([STR-63])
──────────────────────────────────────────

  Usually the master side of a vhost-user system is embedded directly
  in the VMM itself. However in a Xen deployment there is no
  overarching VMM, just a series of utility programs that query the
  hypervisor directly. The Xen tooling is also responsible for setting
  up any support processes that are responsible for emulating HW for
  the guest.

  The task aims to bridge the gap between Xen's normal HW emulation
  path (IOREQ) and VirtIO's userspace device emulation (vhost-user).
  The process would be started with some information on where the
  virtio-mmio address space is and what the slave binary will be. It
  will then (see the sketch after this list):

  • map the guest's memory into Dom0 userspace and attach it to a MemFD
  • register the appropriate memory regions as IOREQ regions with Xen
  • create EventFD channels for the virtio kick notifications (one each
    way)
  • spawn the vhost-user slave process and mediate the notifications and
    kicks between the slave and Xen itself
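
  A speculative sketch of the bridge's top-level flow, mirroring the
  steps above (Config and all the helper functions are hypothetical):

#+begin_src rust
use vmm_sys_util::eventfd::EventFd;

fn run_bridge(cfg: Config) -> std::io::Result<()> {
    // 1. map the guest's memory into Dom0 and re-export it as a memfd
    let mem = map_guest_as_memfd(cfg.domid)?;
    // 2. register the virtio-mmio range as an IOREQ region with Xen
    let ioreq = register_ioreq_region(cfg.domid, cfg.mmio_base, cfg.mmio_len)?;
    // 3. one EventFD per direction for the virtio notifications
    let kick = EventFd::new(0)?; // guest -> slave (ioeventfd role)
    let call = EventFd::new(0)?; // slave -> guest (irqfd role)
    // 4. spawn the existing vhost-user slave and mediate
    let _slave = spawn_slave(&cfg.slave_bin, &mem, &kick, &call)?;
    loop {
        ioreq.wait_for_doorbell()?;      // guest wrote a virtio-mmio reg
        kick.write(1)?;                  // forward the kick to the slave
        if call.read().is_ok() {
            ioreq.inject_irq(cfg.virq)?; // slave raised; kick the guest
        }
    }
}
#+end_src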

#+name: xen-vhost-user-master
#+begin_src ditaa :cmdline -o :file xen_vhost_user_master.png

                          Dom0                                            DomU

                                                          |
                                                          |
                                                          |
                                                          |
                                                          |
                                                          |
  +-------------------+            +-------------------+  |
  |                   |----------->|                   |  |
  |    vhost-user     | vhost-user |    vhost-user     |  :  /------------------------------------\
  |      slave        |  protocol  |      master       |  |  |                                    |
  |    (existing)     |<-----------|      (rust)       |  |  |                                    |
  +-------------------+            +-------------------+  |  |                                    |
           ^                           ^   |       ^      |  |             Guest Userspace        |
           |                           |   |       |      |  |                                    |
           |                           |   | IOREQ |      |  |                                    |
           |                           |   |       |      |  |                                    |
           v                           v   V       |      |  \------------------------------------/
   +---------------------------------------------------+  |  +------------------------------------+
   |       ^                           ^   | ioctl ^   |  |  |                                    |
   |       |   iofd/irqfd eventFD      |   |       |   |  |  |              Guest Kernel          |
   |       +---------------------------+   |       |   |  |  | +-------------+                    |
   |                                       |       |   |  |  | | virtio-dev  |                    |
   |                       Host Kernel     V       |   |  |  | +-------------+                    |
   +---------------------------------------------------+  |  +------------------------------------+
                                           |       ^      |      |         ^
                                           | hyper |             |         |
      ----------------------=------------- | -=--- | ----=------ | -----=- | --------=------------------
                                           |  call |        Trap |         | IRQ
                                           V       |             V         |
            +-------------------------------------------------------------------------------------+
            |                              |       ^             |         ^                      |
            |                              |       +-------------+         |                      |
      EL2   |      Xen Hypervisor          |                               |                      |
            |                              +-------------------------------+                      |
            |                                                                                     |
            +-------------------------------------------------------------------------------------+

#+end_src

[STR-63] <https://linaro.atlassian.net/browse/STR-63>

-- 
Alex Bennée



 

