
Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization Extensions


  • To: xen-devel <xen-devel@xxxxxxxxxxxxx>
  • From: "Eric Trudeau" <etrudeau@xxxxxxxxxxxx>
  • Date: Tue, 9 Jul 2013 17:10:47 +0000
  • Accept-language: en-US
  • Cc: Julien Grall <julien.grall@xxxxxxxxxx>, Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • Delivery-date: Tue, 09 Jul 2013 17:11:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac5x/hIn6nqf5EjJTZqDwdCaQWsF7gAiYrQAAAJe3uAADYW/AAAil8fwAA9S3QAACpI8AP//3YsAgAA0TgD/7gswgA==
  • Thread-topic: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization Extensions

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@xxxxxxxxxx]
> Sent: Thursday, June 27, 2013 6:41 PM
> To: Eric Trudeau
> Cc: xen-devel; Ian Campbell
> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> Extensions
> 
> On 06/27/2013 09:19 PM, Eric Trudeau wrote:
> 
> >> -----Original Message-----
> >> From: Julien Grall [mailto:julien.grall@xxxxxxxxxx]
> >> Sent: Thursday, June 27, 2013 3:34 PM
> >> To: Eric Trudeau
> >> Cc: Ian Campbell; xen-devel
> >> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> >> Extensions
> >>
> >> On 06/27/2013 08:01 PM, Eric Trudeau wrote:
> >>
> >>>> -----Original Message-----
> >>>> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> >>>> Sent: Thursday, June 27, 2013 12:34 PM
> >>>> To: Eric Trudeau
> >>>> Cc: xen-devel
> >>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> >>>>
> >>>> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
> >>>>>> -----Original Message-----
> >>>>>> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> >>>>>> Sent: Wednesday, June 26, 2013 12:45 PM
> >>>>>> To: Eric Trudeau
> >>>>>> Cc: xen-devel
> >>>>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> >>>>>>
> >>>>
> >>>
> >>>>> I rebased to the XSA-55 commit and now I can create the guest.  I am
> >>>>> able to debug a kernel init panic.
> >>>>
> >>>
> >>> My panic seems related to memory regions in the device tree.  I am
> >>> appending my DTB to the kernel zImage.
> >>> How does the memory assigned by Xen for the guest domain get inserted
> >>> into the device tree?
> >>> Does the hypervisor or the toolstack manipulate the appended DTB and
> >>> modify the hypervisor node's reg and irq properties? What about the
> >>> memory node?
> >>
> >>
> >> For the moment, the toolstack isn't able to parse/modify the guest
> >> DTB. Memory and IRQ properties are hardcoded in the hypervisor and the
> >> toolstack. The different values need to match the following constraints:
> >>   - The memory region starts at 0x80000000. The size needs to be the
> >> same in the configuration file and the DTS, otherwise the domain will
> >> crash. I believe the default size is 128MB.
> >>   - IRQ properties are:
> >>       * event channel: 31, unless you have modified the IRQ number in
> >> Xen for dom0;
> >>       * timer: same IRQ numbers as the dom0 DTS;
> >>   - GIC range: same range as the dom0 DTS.
> >>
> >
> > I changed my DTS to hard-code memory at 0x80000000 with size 0x800000.
> > Now, I hit my first I/O access fault.  I tried using the iomem attribute
> > in my dom1.cfg.
> > Is the iomem attribute supported on ARM yet?
> > I see the rangeset iomem_caps being set correctly, but I don't know
> > whether it is being added to the stage-2 page tables (VTTBR) for my
> > guest domain.
> 
> Are you trying to assign a device to your guest? If so, Xen doesn't
> yet support it.
> 
> To map a physical memory range into a guest you need to use/implement
> xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping,
> which is missing on ARM. To help you, you can read this thread:
> http://lists.xen.org/archives/html/xen-devel/2013-06/msg00870.html
> 
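
To close the loop on the DTS questions above: with these constraints applied,
the relevant nodes in my appended guest DTS now look roughly like the sketch
below. The hypervisor node's reg range is a placeholder, and the GIC cells for
PPI 31 are copied from my dom0 DTS, so adjust both for your platform:

    memory@80000000 {
            device_type = "memory";
            /* Base must be 0x80000000; size must match the cfg file. */
            reg = <0x80000000 0x800000>;
    };

    hypervisor {
            compatible = "xen,xen-4.3", "xen,xen";
            reg = <0xb0000000 0x20000>;     /* placeholder range */
            /* Event-channel upcall: PPI 31, i.e. <1 15 ...> in GIC cells. */
            interrupts = <1 15 0xf08>;
    };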

I added support for XEN_DOMCTL_memory_mapping in xen/arch/arm/domctl.c
using xen/arch/x86/domctl.c as a model.  I only implemented the add
functionality.  I then modified domcreate_launch_dm() to call
xc_domain_memory_mapping() instead of xc_domain_iomem_permission() for
the iomem regions in the domain cfg file.
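
For reference, the iomem entries in my dom1.cfg use the usual xl syntax of
"start,count" in hexadecimal page frame numbers; the values below are made-up
placeholders for my device's registers:

    # 0x10 pages of device MMIO starting at machine page 0x2f000
    # (i.e. physical address 0x2f000000), mapped 1:1 into the guest
    iomem = [ "2f000,10" ]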

This allowed my kernel, which unfortunately has some hard-coded accesses to
device memory, to boot in a DomU guest without crashing.
Now I am looking into how to enable IRQs in my guest domains.
Would I implement xc_domain_bind_pt_irq/XEN_DOMCTL_bind_pt_irq in a similar
way to xc_domain_memory_mapping?  Or will the existing
xc_domain_irq_permission calls work?

What functions should I call to implement XEN_DOMCTL_bind_pt_irq on ARM?
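
To make the question concrete, here is the rough shape I have in mind,
modeled on the memory_mapping patch below. This is only a sketch:
gic_route_irq_to_guest() is the helper the dom0 builder uses in
xen/arch/arm/gic.c, and I am not sure it is appropriate for a domU (or that
its signature matches in every tree), so treat the helper choice and the
trigger type as assumptions:

    /* Sketch only -- would slot into the switch in arch_do_domctl()
     * below; needs xen/iocap.h and asm/gic.h. */
    case XEN_DOMCTL_bind_pt_irq:
    {
        unsigned int irq = domctl->u.bind_pt_irq.machine_irq;
        struct dt_irq dt_irq = {
            .irq  = irq,
            .type = DT_IRQ_TYPE_LEVEL_HIGH,  /* assumed level, active-high */
        };

        ret = -EPERM;
        if ( current->domain->domain_id != 0 )
            break;

        /* Let the domain use the physical IRQ... */
        ret = irq_permit_access(d, irq);
        if ( ret )
            break;

        /* ...and route it to the guest's virtual GIC. */
        gic_route_irq_to_guest(d, &dt_irq, "pt-irq");
    }
    break;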

Thanks,
Eric

------------------------------------------------------------------

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 0c32d0b..4196c0c 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -970,8 +970,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);

-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>

 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* Removing I/O memory is not implemented yet. */
+        if ( !add ) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       ((gfn + nr_mfns) << PAGE_SHIFT) - 1,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }

 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

