
[Xen-devel] [PATCH v6 00/15] Argo: hypervisor-mediated interdomain communication



Posting an updated version (v6) of this series with fixes for those who are
testing it. It does not include changes for items still under discussion.

Fixes include:

* Compat validation macros:
  - struct fields converted to use "struct form" in their declarations
  - dropped the overrides; using the struct validator rather than field
  - dropped the compat/argo.c file

* Notify op: the "cannot queue a space available notification" condition is
  now indicated to the caller via a flag instead of failing the operation,
  so queries about the other rings in the same op can still proceed.
  Also reordered the flags while there: static conditions first, errors last.
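The flag-based reporting above can be sketched as follows. This is a minimal
illustration, not the Argo implementation: the flag names and the
`query_ring()` helper are hypothetical (the real ABI constants live in
xen/include/public/argo.h), but it shows the idea of recording a per-ring
condition in the returned flags rather than aborting the whole notify op.

```c
#include <stdint.h>

/* Hypothetical flag names for illustration only. */
#define RING_EXISTS    (1U << 0)  /* static condition: reported first */
#define CANNOT_NOTIFY  (1U << 2)  /* condition, not an error: op continues */

/*
 * Sketch: instead of failing the notify op when a space-available
 * notification cannot be queued for one ring, set a flag in that ring's
 * result and carry on processing the remaining ring queries.
 */
static uint32_t query_ring(int ring_found, int can_queue_notification)
{
    uint32_t flags = 0;

    if ( ring_found )
        flags |= RING_EXISTS;

    if ( !can_queue_notification )
        flags |= CANNOT_NOTIFY;  /* visible to the caller; no error return */

    return flags;
}
```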

* Various fixes:
    - return EFAULT in sendv case of do_argo_op
    - WARN not ERR log level on encountering empty iovs
    - removed redundant bounds check in sendv vs MAX_ARGO_MESSAGE_SIZE
    - tabs for indentation in MAINTAINERS
    - added comment explaining tx_ptr rounding after iovs processing
    - BUILD_BUG_ON check for MAX_RING_SIZE align to PAGE_SIZE
    - use gprintk for error on denied registration of existing ring
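The BUILD_BUG_ON alignment check mentioned above works in the following
style. This is a self-contained sketch with assumed values: PAGE_SIZE and
the ring size limit here are stand-ins, and the BUILD_BUG_ON definition is
a simplified version of the one in Xen's headers, shown only to illustrate
how the build fails at compile time if the size is not page-aligned.

```c
/* Stand-in values for illustration; Xen defines its own. */
#define PAGE_SIZE      4096u
#define MAX_RING_SIZE  (16u * 1024u * 1024u)

/*
 * Simplified compile-time check: if cond is true, this declares a
 * negatively-sized array and the build fails.
 */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

static int ring_size_is_page_aligned(void)
{
    /* Fails the build unless MAX_RING_SIZE is a multiple of PAGE_SIZE. */
    BUILD_BUG_ON(MAX_RING_SIZE % PAGE_SIZE != 0);
    return 1;
}
```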

Christopher Clark (15):
  argo: Introduce the Kconfig option to govern inclusion of Argo
  argo: introduce the argo_op hypercall boilerplate
  argo: define argo_dprintk for subsystem debugging
  argo: init, destroy and soft-reset, with enable command line opt
  errno: add POSIX error codes EMSGSIZE, ECONNREFUSED to the ABI
  xen/arm: introduce guest_handle_for_field()
  argo: implement the register op
  argo: implement the unregister op
  argo: implement the sendv op; evtchn: expose send_guest_global_virq
  argo: implement the notify op
  xsm, argo: XSM control for argo register
  xsm, argo: XSM control for argo message send operation
  xsm, argo: XSM control for any access to argo by a domain
  xsm, argo: notify: don't describe rings that cannot be sent to
  MAINTAINERS: add new section for Argo and self as maintainer

 MAINTAINERS                                  |    7 +
 docs/misc/xen-command-line.pandoc            |   20 +
 tools/flask/policy/modules/guest_features.te |    7 +
 xen/arch/x86/guest/hypercall_page.S          |    2 +-
 xen/arch/x86/hvm/hypercall.c                 |    3 +
 xen/arch/x86/hypercall.c                     |    3 +
 xen/arch/x86/pv/hypercall.c                  |    3 +
 xen/common/Kconfig                           |   19 +
 xen/common/Makefile                          |    1 +
 xen/common/argo.c                            | 2321 ++++++++++++++++++++++++++
 xen/common/domain.c                          |    9 +
 xen/common/event_channel.c                   |    2 +-
 xen/include/Makefile                         |    1 +
 xen/include/asm-arm/guest_access.h           |    3 +
 xen/include/public/argo.h                    |  283 ++++
 xen/include/public/errno.h                   |    2 +
 xen/include/public/xen.h                     |    4 +-
 xen/include/xen/argo.h                       |   44 +
 xen/include/xen/event.h                      |    7 +
 xen/include/xen/hypercall.h                  |    9 +
 xen/include/xen/sched.h                      |    5 +
 xen/include/xlat.lst                         |    8 +
 xen/include/xsm/dummy.h                      |   25 +
 xen/include/xsm/xsm.h                        |   31 +
 xen/xsm/dummy.c                              |    6 +
 xen/xsm/flask/hooks.c                        |   41 +-
 xen/xsm/flask/policy/access_vectors          |   16 +
 xen/xsm/flask/policy/security_classes        |    1 +
 28 files changed, 2876 insertions(+), 7 deletions(-)
 create mode 100644 xen/common/argo.c
 create mode 100644 xen/include/public/argo.h
 create mode 100644 xen/include/xen/argo.h

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
