
Re: [Xen-devel] [PATCH v3 for 4.5] xen/arm: Add support for GICv3 for domU



Hi Konrad,

On 05/11/2014 20:12, Konrad Rzeszutek Wilk wrote:
On Wed, Nov 05, 2014 at 01:04:22PM +0000, Julien Grall wrote:
The vGIC will emulate the same version as the hardware GIC. The toolstack has
to retrieve the vGIC version in order to create the corresponding device tree
node.

A new DOMCTL has been introduced for ARM to configure the domain. For now
it only allows the toolstack to retrieve the version of the vGIC (a usage
sketch follows the notes below).
This DOMCTL will be extended later to let the user choose the version of the
emulated GIC.

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>

---
Stefano, I made one change in the guest memory layout, so I've dropped your
Reviewed-by.

     Changes in v3:
         - Fix typos and update some comments
         - Coding style
         - Only reserve space for an 8-VCPU redistributor region in the
         guest memory layout

     Changes in v2:
         - Use memcpy in xc_domain_configure
         - Rename the DOMCTL into domctl_arm_configuredomain
         - Drop arch_domain_create_pre. It will be reintroduced in Xen 4.6
         and the code folded into arch_domain_init_hw_description
         - Return -EOPNOTSUPP when the value of gic_version is not
         supported

This patch is based on Vijay's GICv3 guest support patch [1] and my patch to
defer the initialization of the vGIC [2].

Xen 4.5 already has support for the hardware GICv3. While it's possible to
boot and use DOM0, there is no guest support. The first version of this patch
was sent a couple of months ago, but never reached unstable for various
reasons (it was based on a deferred series, lack of review at that time...).
Without it, platforms with a GICv3 won't be able to boot any guest.

The patch has been reworked to make it acceptable for Xen 4.5. Except for the
new DOMCTL to retrieve the GIC version and one check, the code is purely GICv3.

Features such as adding a new config option to let the user choose the GIC
version are deferred to Xen 4.6.

It has been tested on both GICv2 and GICv3 platforms, and build-tested for x86.

[1] http://lists.xen.org/archives/html/xen-devel/2014-10/msg00583.html
[2] https://patches.linaro.org/34664/
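
For reference, an illustrative sketch (not part of the patch) of how the
toolstack side could use the new call to pick the right GIC node for the guest
device tree. The gic_version field name and the assumption that the DOMCTL
copies the chosen configuration back to the caller are inferred from the
description above, not taken verbatim from libxl_arm.c:

    /* Sketch only: ask Xen which vGIC version it chose for this domain so
     * the toolstack can generate the matching GIC node in the guest device
     * tree. The gic_version field name is an assumption; the real layout
     * lives in xen/include/public/domctl.h. */
    #include <string.h>
    #include <xenctrl.h>

    static int get_guest_gic_version(xc_interface *xch, uint32_t domid,
                                     uint32_t *version)
    {
        xc_domain_configuration_t config;

        memset(&config, 0, sizeof(config));

        /* Assumes the DOMCTL fills the structure back in on success, as the
         * description says the toolstack retrieves the vGIC version. */
        if ( xc_domain_configure(xch, domid, &config) != 0 )
            return -1;

        *version = config.gic_version;

        return 0;
    }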
---
  tools/flask/policy/policy/modules/xen/xen.if |    2 +-
  tools/libxc/include/xenctrl.h                |    6 ++
  tools/libxc/xc_domain.c                      |   20 ++++++
  tools/libxl/libxl_arm.c                      |   95 ++++++++++++++++++++++++--
  xen/arch/arm/domctl.c                        |   35 ++++++++++
  xen/arch/arm/gic-v3.c                        |   16 ++++-
  xen/include/public/arch-arm.h                |   16 +++++
  xen/include/public/domctl.h                  |   17 +++++
  xen/xsm/flask/hooks.c                        |    3 +
  xen/xsm/flask/policy/access_vectors          |    2 +
  10 files changed, 204 insertions(+), 8 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 641c797..fa69c9d 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
                        getdomaininfo hypercall setvcpucontext setextvcpucontext
                        getscheduler getvcpuinfo getvcpuextstate getaddrsize
                        getaffinity setaffinity };
-       allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn set_vnumainfo get_vnumainfo psr_cmt_op };
+       allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn set_vnumainfo get_vnumainfo psr_cmt_op configure_domain };
        allow $1 $2:security check_context;
        allow $1 $2:shadow enable;
        allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 564e187..45e282c 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -483,6 +483,12 @@ int xc_domain_create(xc_interface *xch,
                       uint32_t flags,
                       uint32_t *pdomid);

+#if defined(__arm__) || defined(__aarch64__)
+typedef xen_domctl_arm_configuredomain_t xc_domain_configuration_t;
+
+int xc_domain_configure(xc_interface *xch, uint32_t domid,
+                        xc_domain_configuration_t *config);
+#endif

  /* Functions to produce a dump of a given domain
   *  xc_domain_dumpcore - produces a dump to a specified file
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index a9bcd4a..b071dea 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,26 @@ int xc_domain_create(xc_interface *xch,
      return 0;
  }

+#if defined(__arm__) || defined(__aarch64__)
+int xc_domain_configure(xc_interface *xch, uint32_t domid,
+                        xc_domain_configuration_t *config)
+{
+    int rc;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_arm_configure_domain;
+    domctl.domain = (domid_t)domid;
+    /* xc_domain_configuration_t is an alias of xen_domctl_arm_configuredomain_t */
+    memcpy(&domctl.u.configuredomain, config, sizeof(*config));

Why the memcpy? Why not the bounce framework? (xc_hypercall_bounce_post, etc)?

Because AFAIK xc_hypercall_bounce_* is used in coordination with GUEST_HANDLE.

In this case, we don't have a guest handle and the job is already done in do_domctl.

Also, it's not possible to copy only part of the structure; that would require unrolling the memcpy and assigning every field one by one.
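
To illustrate, here is a simplified sketch (reconstructed from memory, not the
exact tools/libxc source) of what do_domctl already does: the whole struct
xen_domctl, including the bytes placed in u.configuredomain by the memcpy
above, is bounced in and out around the hypercall, so a second bounce inside
xc_domain_configure would be redundant.

    /* Rough sketch of libxc's do_domctl, from memory; the bounce macro
     * names may not match the current tree exactly. */
    #include "xc_private.h"   /* libxc-internal header providing the macros below */

    static int do_domctl_sketch(xc_interface *xch, struct xen_domctl *domctl)
    {
        int ret = -1;
        DECLARE_HYPERCALL;
        DECLARE_HYPERCALL_BOUNCE(domctl, sizeof(*domctl),
                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

        domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION;

        /* Copies the whole domctl (u.configuredomain included) into a
         * hypercall-safe buffer. */
        if ( xc_hypercall_bounce_pre(xch, domctl) )
            return -1;

        hypercall.op     = __HYPERVISOR_domctl;
        hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(domctl);

        ret = do_xen_hypercall(xch, &hypercall);

        /* Copies the (possibly updated) domctl back to the caller. */
        xc_hypercall_bounce_post(xch, domctl);

        return ret;
    }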

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

